Jan 22 09:10:29 crc systemd[1]: Starting Kubernetes Kubelet... Jan 22 09:10:29 crc restorecon[4694]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 22 09:10:29 
crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 22 
09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 09:10:29 crc 
restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 09:10:29 
crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 
09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc 
restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:10:29 crc restorecon[4694]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 09:10:29 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 09:10:30 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 09:10:30 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 09:10:30 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 09:10:30 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 09:10:30 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 09:10:30 crc restorecon[4694]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 09:10:30 crc restorecon[4694]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 22 09:10:30 crc kubenswrapper[4892]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 09:10:30 crc kubenswrapper[4892]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 22 09:10:30 crc kubenswrapper[4892]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 09:10:30 crc kubenswrapper[4892]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 22 09:10:30 crc kubenswrapper[4892]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 22 09:10:30 crc kubenswrapper[4892]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.486462 4892 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494132 4892 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494219 4892 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494231 4892 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494242 4892 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494252 4892 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494264 4892 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494275 4892 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494312 4892 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494321 4892 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494329 4892 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494338 4892 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494346 4892 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494355 4892 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494362 4892 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494370 4892 feature_gate.go:330] unrecognized feature gate: Example Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494381 4892 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494389 4892 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494397 4892 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494406 4892 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494414 4892 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494422 4892 feature_gate.go:330] unrecognized feature gate: 
AdminNetworkPolicy Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494434 4892 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494445 4892 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494456 4892 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494464 4892 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494473 4892 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494484 4892 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494497 4892 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494505 4892 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494513 4892 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494521 4892 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494530 4892 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494539 4892 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494547 4892 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494555 4892 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494563 4892 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494572 4892 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494581 4892 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494589 4892 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494597 4892 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494606 4892 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494615 4892 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494624 4892 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494633 4892 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494642 4892 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494650 4892 feature_gate.go:330] unrecognized feature gate: 
DNSNameResolver Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494662 4892 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494670 4892 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494679 4892 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494687 4892 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494695 4892 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494702 4892 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494710 4892 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494720 4892 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494728 4892 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494735 4892 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494744 4892 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494752 4892 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494759 4892 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494767 4892 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494775 4892 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494782 4892 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494797 4892 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
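The long runs of "unrecognized feature gate" warnings above come from the kubelet checking a cluster-supplied gate map against the set of gates this particular build knows about: names it does not recognize (the OpenShift-specific ones such as GatewayAPI or PinnedImages) are warned about and skipped rather than treated as fatal, while known-but-deprecated or GA gates draw the feature_gate.go:351/353 warnings instead. A minimal, self-contained sketch of that shape — hypothetical names and structure, not the kubelet's real feature_gate.go API:

```go
// featuregates_sketch.go — toy validation of a "Gate=bool" list against
// a set of known gates, mirroring the warn-and-skip behavior logged
// above. Hypothetical names; illustrative only.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// known stands in for the gates this kubelet build recognizes; anything
// else gets an "unrecognized feature gate" warning, not an error.
var known = map[string]bool{
	"KMSv1":                     true,
	"ValidatingAdmissionPolicy": true,
	"CloudDualStackNodeIPs":     true,
}

func parseGates(spec string) map[string]bool {
	out := map[string]bool{}
	for _, pair := range strings.Split(spec, ",") {
		k, v, ok := strings.Cut(strings.TrimSpace(pair), "=")
		if !ok {
			continue
		}
		b, err := strconv.ParseBool(v)
		if err != nil {
			fmt.Printf("W invalid value for feature gate %s: %q\n", k, v)
			continue
		}
		if !known[k] {
			fmt.Printf("W unrecognized feature gate: %s\n", k)
			continue
		}
		out[k] = b
	}
	return out
}

func main() {
	gates := parseGates("KMSv1=true,GatewayAPI=true,PinnedImages=true")
	fmt.Println("feature gates:", gates) // only recognized entries survive
}
```

Warn-and-skip (rather than fail) is what lets one cluster-wide gate list be fed to components built from different trees, which is why the same block of warnings can repeat each time a component re-parses the list.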
Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494807 4892 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494816 4892 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494825 4892 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494835 4892 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494844 4892 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494855 4892 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494866 4892 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.494874 4892 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495487 4892 flags.go:64] FLAG: --address="0.0.0.0" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495517 4892 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495567 4892 flags.go:64] FLAG: --anonymous-auth="true" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495579 4892 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495594 4892 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495605 4892 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495619 4892 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495632 4892 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495642 4892 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495652 4892 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495662 4892 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495675 4892 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495685 4892 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495695 4892 flags.go:64] FLAG: --cgroup-root="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495704 4892 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495713 4892 flags.go:64] FLAG: --client-ca-file="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495723 4892 flags.go:64] FLAG: --cloud-config="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495733 4892 flags.go:64] FLAG: --cloud-provider="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495743 4892 flags.go:64] FLAG: --cluster-dns="[]" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495783 4892 flags.go:64] FLAG: --cluster-domain="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495793 
4892 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495803 4892 flags.go:64] FLAG: --config-dir="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495812 4892 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495822 4892 flags.go:64] FLAG: --container-log-max-files="5" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495835 4892 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495845 4892 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495855 4892 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495865 4892 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495875 4892 flags.go:64] FLAG: --contention-profiling="false" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495885 4892 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495895 4892 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495905 4892 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495915 4892 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495930 4892 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495939 4892 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495949 4892 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495958 4892 flags.go:64] FLAG: --enable-load-reader="false" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495968 4892 flags.go:64] FLAG: --enable-server="true" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495978 4892 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.495992 4892 flags.go:64] FLAG: --event-burst="100" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496002 4892 flags.go:64] FLAG: --event-qps="50" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496011 4892 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496020 4892 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496030 4892 flags.go:64] FLAG: --eviction-hard="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496042 4892 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496051 4892 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496061 4892 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496071 4892 flags.go:64] FLAG: --eviction-soft="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496081 4892 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496090 4892 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 22 09:10:30 crc 
kubenswrapper[4892]: I0122 09:10:30.496100 4892 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496109 4892 flags.go:64] FLAG: --experimental-mounter-path="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496119 4892 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496128 4892 flags.go:64] FLAG: --fail-swap-on="true" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496137 4892 flags.go:64] FLAG: --feature-gates="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496149 4892 flags.go:64] FLAG: --file-check-frequency="20s" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496158 4892 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496168 4892 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496178 4892 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496188 4892 flags.go:64] FLAG: --healthz-port="10248" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496198 4892 flags.go:64] FLAG: --help="false" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496208 4892 flags.go:64] FLAG: --hostname-override="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496218 4892 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496228 4892 flags.go:64] FLAG: --http-check-frequency="20s" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496238 4892 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496248 4892 flags.go:64] FLAG: --image-credential-provider-config="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496257 4892 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496267 4892 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496276 4892 flags.go:64] FLAG: --image-service-endpoint="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496312 4892 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496322 4892 flags.go:64] FLAG: --kube-api-burst="100" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496332 4892 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496342 4892 flags.go:64] FLAG: --kube-api-qps="50" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496351 4892 flags.go:64] FLAG: --kube-reserved="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496360 4892 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496370 4892 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496380 4892 flags.go:64] FLAG: --kubelet-cgroups="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496389 4892 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496398 4892 flags.go:64] FLAG: --lock-file="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496407 4892 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 
09:10:30.496417 4892 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496427 4892 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496444 4892 flags.go:64] FLAG: --log-json-split-stream="false" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496455 4892 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496464 4892 flags.go:64] FLAG: --log-text-split-stream="false" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496474 4892 flags.go:64] FLAG: --logging-format="text" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496484 4892 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496493 4892 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496503 4892 flags.go:64] FLAG: --manifest-url="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496512 4892 flags.go:64] FLAG: --manifest-url-header="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496528 4892 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496538 4892 flags.go:64] FLAG: --max-open-files="1000000" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496550 4892 flags.go:64] FLAG: --max-pods="110" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496560 4892 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496570 4892 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496579 4892 flags.go:64] FLAG: --memory-manager-policy="None" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496588 4892 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496598 4892 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496607 4892 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496617 4892 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496648 4892 flags.go:64] FLAG: --node-status-max-images="50" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496658 4892 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496667 4892 flags.go:64] FLAG: --oom-score-adj="-999" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496676 4892 flags.go:64] FLAG: --pod-cidr="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496686 4892 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496705 4892 flags.go:64] FLAG: --pod-manifest-path="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496715 4892 flags.go:64] FLAG: --pod-max-pids="-1" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496725 4892 flags.go:64] FLAG: --pods-per-core="0" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496734 4892 flags.go:64] FLAG: --port="10250" Jan 22 09:10:30 crc 
kubenswrapper[4892]: I0122 09:10:30.496744 4892 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496754 4892 flags.go:64] FLAG: --provider-id="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496764 4892 flags.go:64] FLAG: --qos-reserved="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496773 4892 flags.go:64] FLAG: --read-only-port="10255" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496783 4892 flags.go:64] FLAG: --register-node="true" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496793 4892 flags.go:64] FLAG: --register-schedulable="true" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496802 4892 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496821 4892 flags.go:64] FLAG: --registry-burst="10" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496831 4892 flags.go:64] FLAG: --registry-qps="5" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496840 4892 flags.go:64] FLAG: --reserved-cpus="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496850 4892 flags.go:64] FLAG: --reserved-memory="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496863 4892 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496873 4892 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496883 4892 flags.go:64] FLAG: --rotate-certificates="false" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496892 4892 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496901 4892 flags.go:64] FLAG: --runonce="false" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496910 4892 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496920 4892 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496931 4892 flags.go:64] FLAG: --seccomp-default="false" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496940 4892 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496950 4892 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496959 4892 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496970 4892 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496980 4892 flags.go:64] FLAG: --storage-driver-password="root" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496989 4892 flags.go:64] FLAG: --storage-driver-secure="false" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.496998 4892 flags.go:64] FLAG: --storage-driver-table="stats" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.497008 4892 flags.go:64] FLAG: --storage-driver-user="root" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.497028 4892 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.497038 4892 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.497048 4892 flags.go:64] FLAG: --system-cgroups="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 
09:10:30.497057 4892 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.497073 4892 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.497082 4892 flags.go:64] FLAG: --tls-cert-file="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.497092 4892 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.497105 4892 flags.go:64] FLAG: --tls-min-version="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.497114 4892 flags.go:64] FLAG: --tls-private-key-file="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.497123 4892 flags.go:64] FLAG: --topology-manager-policy="none" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.497132 4892 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.497142 4892 flags.go:64] FLAG: --topology-manager-scope="container" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.497151 4892 flags.go:64] FLAG: --v="2" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.497164 4892 flags.go:64] FLAG: --version="false" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.497177 4892 flags.go:64] FLAG: --vmodule="" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.497190 4892 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.497200 4892 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497461 4892 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497474 4892 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
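The FLAG dump that ends here records the effective command line, including --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" — the very flag the deprecation warning at the top of this boot says should move into the file named by --config (/etc/kubernetes/kubelet.conf here). As a rough illustration of the flag's comma-separated key=value format (a sketch; the kubelet's own parsing lives in its config packages):

```go
// systemreserved_sketch.go — parses the resource-list format used by
// --system-reserved in the FLAG dump above. Illustrative only.
package main

import (
	"fmt"
	"strings"
)

func parseResourceList(spec string) (map[string]string, error) {
	out := map[string]string{}
	for _, pair := range strings.Split(spec, ",") {
		name, qty, ok := strings.Cut(pair, "=")
		if !ok {
			return nil, fmt.Errorf("malformed entry %q", pair)
		}
		out[name] = qty // quantities like "200m" or "350Mi" kept as strings
	}
	return out, nil
}

func main() {
	r, err := parseResourceList("cpu=200m,ephemeral-storage=350Mi,memory=350Mi")
	if err != nil {
		panic(err)
	}
	fmt.Println(r) // map[cpu:200m ephemeral-storage:350Mi memory:350Mi]
}
```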
Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497487 4892 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497496 4892 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497504 4892 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497513 4892 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497531 4892 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497540 4892 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497548 4892 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497556 4892 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497564 4892 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497571 4892 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497580 4892 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497587 4892 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497595 4892 feature_gate.go:330] unrecognized feature gate: Example Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497606 4892 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497614 4892 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497622 4892 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497629 4892 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497638 4892 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497646 4892 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497654 4892 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497662 4892 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497670 4892 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497681 4892 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497692 4892 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497701 4892 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497709 4892 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497719 4892 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497728 4892 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497736 4892 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497746 4892 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497755 4892 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497763 4892 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497771 4892 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497779 4892 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497787 4892 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497795 4892 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497807 4892 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497816 4892 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497823 4892 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497831 4892 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497839 4892 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497847 4892 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497857 4892 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497866 4892 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497874 4892 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497885 4892 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497892 4892 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497900 4892 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497908 4892 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497915 4892 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497923 4892 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497931 4892 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497938 4892 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497946 4892 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497954 4892 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497962 4892 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497969 4892 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497977 4892 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497984 4892 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.497992 4892 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.498000 4892 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.498008 4892 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.498015 4892 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.498023 4892 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.498031 4892 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.498039 4892 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.498047 4892 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.498057 4892 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.498070 4892 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.498097 4892 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.511884 4892 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.511945 4892 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512205 4892 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512229 4892 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512238 4892 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512247 4892 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512257 4892 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512266 4892 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512275 4892 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512310 4892 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512319 4892 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512328 4892 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512337 4892 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512345 4892 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512353 4892 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512361 4892 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512370 4892 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512379 4892 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512387 4892 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512395 4892 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512404 4892 feature_gate.go:330] unrecognized feature 
gate: PlatformOperators Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512412 4892 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512421 4892 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512429 4892 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512437 4892 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512445 4892 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512453 4892 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512462 4892 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512470 4892 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512478 4892 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512486 4892 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512497 4892 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512507 4892 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512516 4892 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512523 4892 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512531 4892 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512539 4892 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512547 4892 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512555 4892 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512565 4892 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512577 4892 feature_gate.go:330] unrecognized feature gate: Example Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512586 4892 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512595 4892 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512603 4892 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512612 4892 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512620 4892 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512628 4892 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512639 4892 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512650 4892 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512659 4892 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512668 4892 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512676 4892 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512685 4892 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512694 4892 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512702 4892 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512711 4892 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512719 4892 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512727 4892 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512735 4892 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512743 4892 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512751 4892 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512759 4892 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512767 4892 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512775 4892 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512787 4892 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512799 4892 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512810 4892 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512821 4892 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512831 4892 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512841 4892 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512849 4892 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512857 4892 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.512866 4892 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.512880 4892 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513113 4892 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513128 4892 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513137 4892 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513146 4892 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513155 4892 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513164 4892 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513172 4892 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513180 4892 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513189 4892 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513198 4892 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513206 4892 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513214 4892 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513222 4892 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513230 4892 
feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513238 4892 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513248 4892 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513258 4892 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513267 4892 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513275 4892 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513312 4892 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513321 4892 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513329 4892 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513338 4892 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513346 4892 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513354 4892 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513363 4892 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513371 4892 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513379 4892 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513387 4892 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513396 4892 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513405 4892 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513413 4892 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513421 4892 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513429 4892 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513437 4892 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513445 4892 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513456 4892 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513467 4892 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513477 4892 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513489 4892 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513500 4892 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513512 4892 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513523 4892 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513534 4892 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513544 4892 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513554 4892 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513564 4892 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513573 4892 feature_gate.go:330] unrecognized feature gate: Example Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513581 4892 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513590 4892 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513598 4892 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513606 4892 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513617 4892 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513627 4892 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513637 4892 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513645 4892 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513653 4892 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513662 4892 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513670 4892 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513680 4892 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513690 4892 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513698 4892 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513708 4892 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513717 4892 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513726 4892 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513735 4892 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513744 4892 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513752 4892 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513761 4892 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513769 4892 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 22 09:10:30 crc kubenswrapper[4892]: W0122 09:10:30.513778 4892 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.513792 4892 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.514381 4892 server.go:940] "Client rotation is on, will bootstrap in background" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.520556 4892 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.520853 4892 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
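With "Client rotation is on" and the current cert/key pair loaded, the records that follow compute a rotation deadline well before the certificate's 2026-02-24 expiry. The upstream certificate manager picks a jittered point late in the validity window so a fleet of kubelets does not stampede the CSR API at once; a toy version of that computation, under an assumed 70–90% band (the exact policy is internal to the certificate manager, not quoted from this log):

```go
// rotation_sketch.go — toy jittered rotation deadline inside a client
// certificate's validity window, the kind of computation behind the
// "rotation deadline is ..." record below. The 70–90% band is an
// assumption for illustration, not the kubelet's quoted policy.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64() // jitter across the fleet
	return notBefore.Add(time.Duration(frac * float64(total)))
}

func main() {
	notBefore := time.Date(2025, 2, 24, 5, 52, 8, 0, time.UTC)
	notAfter := time.Date(2026, 2, 24, 5, 52, 8, 0, time.UTC)
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}
```

Note that the CSR POST below fails with "connection refused": this early in boot the API server at api-int.crc.testing:6443 is not up yet, so the rotation attempt is simply retried in the background.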
Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.521846 4892 server.go:997] "Starting client certificate rotation"
Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.521872 4892 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.522083 4892 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-08 16:06:31.587917191 +0000 UTC
Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.522194 4892 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.530053 4892 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 22 09:10:30 crc kubenswrapper[4892]: E0122 09:10:30.533022 4892 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.236:6443: connect: connection refused" logger="UnhandledError"
Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.535047 4892 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.546652 4892 log.go:25] "Validated CRI v1 runtime API"
Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.575787 4892 log.go:25] "Validated CRI v1 image API"
Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.578689 4892 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.581759 4892 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-22-09-01-29-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.581803 4892 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.603030 4892 manager.go:217] Machine: {Timestamp:2026-01-22 09:10:30.601754564 +0000 UTC m=+0.445833667 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:61509b40-08df-4430-847e-d3a8d2681f9e BootID:1c930485-9734-4304-ad2c-ecfe6f90ae0f Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:f3:48:ff Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:f3:48:ff Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:2d:4f:df Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:79:f8:50 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:6c:dc:1b Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:3e:13:a3 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:6a:aa:29:c0:3c:e7 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:2a:d1:05:e8:e8:90 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.603307 4892 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.603525 4892 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.604989 4892 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.605210 4892 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.605258 4892 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.605556 4892 topology_manager.go:138] "Creating topology manager with none policy"
Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.605571 4892 container_manager_linux.go:303] "Creating device plugin manager"
Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.605795 4892 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.605859 4892 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.606178 4892 state_mem.go:36] "Initialized new in-memory state store"
Jan 22 09:10:30 crc kubenswrapper[4892]: I0122 09:10:30.606278 4892 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.346762 4892 kubelet.go:418] "Attempting to sync node with API server"
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.346811 4892 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.346852 4892 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.346872 4892 kubelet.go:324] "Adding apiserver pod source"
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.346888 4892 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.348871 4892 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.349515 4892 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 22 09:10:31 crc kubenswrapper[4892]: W0122 09:10:31.349965 4892 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused
Jan 22 09:10:31 crc kubenswrapper[4892]: E0122 09:10:31.350067 4892 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.236:6443: connect: connection refused" logger="UnhandledError"
Jan 22 09:10:31 crc kubenswrapper[4892]: W0122 09:10:31.350045 4892 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused
Jan 22 09:10:31 crc kubenswrapper[4892]: E0122 09:10:31.350130 4892 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.236:6443: connect: connection refused" logger="UnhandledError"
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.350527 4892 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.351098 4892 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.351126 4892 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.351136 4892 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.351145 4892 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.351160 4892 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.351171 4892 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.351183 4892 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.351198 4892 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.351209 4892 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.351218 4892 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.351230 4892 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.351239 4892 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.351467 4892 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.351918 4892 server.go:1280] "Started kubelet"
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.352168 4892 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.352145 4892 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.352901 4892 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 22 09:10:31 crc systemd[1]: Started Kubernetes Kubelet.
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.353715 4892 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.354507 4892 server.go:460] "Adding debug handlers to kubelet server"
Jan 22 09:10:31 crc kubenswrapper[4892]: E0122 09:10:31.355685 4892 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.236:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d028c8f73e4cf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:10:31.351887055 +0000 UTC m=+1.195966128,LastTimestamp:2026-01-22 09:10:31.351887055 +0000 UTC m=+1.195966128,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.359630 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.359686 4892 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.359762 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 23:14:50.092852451 +0000 UTC
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.359798 4892 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.359826 4892 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 22 09:10:31 crc kubenswrapper[4892]: E0122 09:10:31.359831 4892 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.359989 4892 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 22 09:10:31 crc kubenswrapper[4892]: E0122 09:10:31.360726 4892 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" interval="200ms"
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.362088 4892 factory.go:55] Registering systemd factory
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.362116 4892 factory.go:221] Registration of the systemd container factory successfully
Jan 22 09:10:31 crc kubenswrapper[4892]: W0122 09:10:31.362276 4892 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused
Jan 22 09:10:31 crc kubenswrapper[4892]: E0122 09:10:31.362380 4892 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.236:6443: connect: connection refused" logger="UnhandledError"
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.363158 4892 factory.go:153] Registering CRI-O factory
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.363180 4892 factory.go:221] Registration of the crio container factory successfully
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.363273 4892 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.363311 4892 factory.go:103] Registering Raw factory
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.363328 4892 manager.go:1196] Started watching for new ooms in manager
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.364574 4892 manager.go:319] Starting recovery of all containers
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.371824 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.371932 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.371956 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.371999 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372018 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372037 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372053 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372070 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372086 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372098 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372110 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372124 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372137 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372152 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372165 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372180 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372194 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372207 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372222 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372238 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372251 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372264 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372324 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372341 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372354 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372367 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372383 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372400 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372413 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372461 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372477 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372508 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372522 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372540 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372553 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372566 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372579 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372592 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372605 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372618 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372630 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372641 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372653 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372665 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372679 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372693 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372705 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372719 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372731 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372745 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.372759 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374050 4892 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374083 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374106 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374122 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374136 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374149 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374163 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374176 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374187 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374199 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374212 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374225 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374239 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374251 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374264 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374276 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374307 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374318 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374331 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374344 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374356 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374370 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374381 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374393 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374404 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374418 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374465 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374478 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374493 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374504 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374515 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374526 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374537 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374549 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374563 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374575 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374586 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374597 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374609 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374620 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374634 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374646 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374656 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374668 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374679 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374691 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374702 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374714 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374725 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374735 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374746 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374757 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374767 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374779 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374795 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374808 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374822 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374836 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374848 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374860 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374875 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374888 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374901 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374913 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374925 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374980 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.374994 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375006 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375018 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375035 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375047 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375058 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375069 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375081 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375092 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375103 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375121 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375131 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375142 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375157 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375168 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375182 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375193 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375206 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375218 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375229 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375240 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375250 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375266 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a"
volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375277 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375306 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375319 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375330 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375341 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375352 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375367 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375378 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375388 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375399 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375410 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375421 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375432 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375443 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375453 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375467 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375478 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375489 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375503 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375516 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375531 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375542 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375553 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375564 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375575 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375589 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375601 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375613 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375625 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375637 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375651 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375664 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375676 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375688 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375699 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375712 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375723 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375734 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375745 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375758 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375770 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375781 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375793 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375804 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375815 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375828 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375838 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375852 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375863 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375875 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375885 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375899 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375910 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375921 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375938 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375948 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375959 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375984 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.375995 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.376008 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.376018 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.376028 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.376039 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.376052 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.376063 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.376078 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.376090 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.376102 4892 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.376115 4892 reconstruct.go:97] "Volume reconstruction finished" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.376124 4892 reconciler.go:26] "Reconciler: start to sync state" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.381104 4892 manager.go:324] Recovery completed Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.391552 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.393150 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.393179 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.393189 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.393970 4892 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.394111 4892 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.394239 4892 state_mem.go:36] "Initialized new in-memory state store" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.405430 4892 policy_none.go:49] "None policy: Start" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.407032 4892 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.407064 4892 state_mem.go:35] "Initializing new in-memory state store" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.415242 4892 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.417318 4892 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.417359 4892 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.417394 4892 kubelet.go:2335] "Starting kubelet main sync loop" Jan 22 09:10:31 crc kubenswrapper[4892]: E0122 09:10:31.417499 4892 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 22 09:10:31 crc kubenswrapper[4892]: W0122 09:10:31.418516 4892 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused Jan 22 09:10:31 crc kubenswrapper[4892]: E0122 09:10:31.418626 4892 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.236:6443: connect: connection refused" logger="UnhandledError" Jan 22 09:10:31 crc kubenswrapper[4892]: E0122 09:10:31.460696 4892 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.471520 4892 manager.go:334] "Starting Device Plugin manager" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.471740 4892 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.471752 4892 server.go:79] "Starting device plugin registration server" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.472344 4892 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.472365 4892 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.472624 4892 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.472768 4892 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.472781 4892 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 22 09:10:31 crc kubenswrapper[4892]: E0122 09:10:31.479211 4892 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.517989 4892 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.518161 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.520350 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.520397 4892 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.520408 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.520568 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.520795 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.520868 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.521674 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.521721 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.521733 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.521924 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.522047 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.522085 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.522085 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.522134 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.522154 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.523147 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.523162 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.523194 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.523378 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.523351 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.523468 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.523586 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.523802 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.523879 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.524212 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.524239 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.524253 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.524396 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.524492 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.524520 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.524923 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.524947 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.524958 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.524989 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.525006 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.525015 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.525088 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.525108 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.525119 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.525197 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.525222 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.525840 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.525877 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.525889 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:31 crc kubenswrapper[4892]: E0122 09:10:31.562319 4892 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" interval="400ms" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.573121 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.574315 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.574427 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.574498 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.574588 4892 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 09:10:31 crc kubenswrapper[4892]: E0122 09:10:31.575095 4892 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.236:6443: connect: connection refused" node="crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.578240 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.578267 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.578308 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.578326 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.578343 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.578357 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.578372 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.578431 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.578500 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.578569 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.578610 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.578635 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.578737 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.578794 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.578831 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.680496 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.680581 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.680623 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.680657 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.680689 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.680704 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.680737 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.680765 4892 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.680809 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.680777 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.680848 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.680846 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.680857 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.680892 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.680878 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.680895 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.680966 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.680985 4892 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.681003 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.681022 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.681041 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.680927 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.681081 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.681111 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.680903 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.681159 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.681160 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.681060 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.681181 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.681195 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.776240 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.777833 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.777884 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.777899 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.777932 4892 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 09:10:31 crc kubenswrapper[4892]: E0122 09:10:31.778507 4892 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.236:6443: connect: connection refused" node="crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.860135 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.867357 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.890196 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.900012 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: I0122 09:10:31.907026 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 22 09:10:31 crc kubenswrapper[4892]: W0122 09:10:31.908382 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-f4e6bbe2f57ddaf641263b0a02715350d743518b2eb9ddb6bbfec1395230df04 WatchSource:0}: Error finding container f4e6bbe2f57ddaf641263b0a02715350d743518b2eb9ddb6bbfec1395230df04: Status 404 returned error can't find the container with id f4e6bbe2f57ddaf641263b0a02715350d743518b2eb9ddb6bbfec1395230df04 Jan 22 09:10:31 crc kubenswrapper[4892]: W0122 09:10:31.910459 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-98fa280da6bfad0c76330d83e3e8e2131c80c6a565bfb79dfb9bdb7393f6d8e9 WatchSource:0}: Error finding container 98fa280da6bfad0c76330d83e3e8e2131c80c6a565bfb79dfb9bdb7393f6d8e9: Status 404 returned error can't find the container with id 98fa280da6bfad0c76330d83e3e8e2131c80c6a565bfb79dfb9bdb7393f6d8e9 Jan 22 09:10:31 crc kubenswrapper[4892]: W0122 09:10:31.919168 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-74fd989fbf263a1e48b0b1eacedfd78ad34175df5b003d872374aa57cb2f4387 WatchSource:0}: Error finding container 74fd989fbf263a1e48b0b1eacedfd78ad34175df5b003d872374aa57cb2f4387: Status 404 returned error can't find the container with id 74fd989fbf263a1e48b0b1eacedfd78ad34175df5b003d872374aa57cb2f4387 Jan 22 09:10:31 crc kubenswrapper[4892]: W0122 09:10:31.921140 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-b2223d6f9afca60f1f25afab22f9a48f09bd9b9fbd260082ffdd9cc619702bcd WatchSource:0}: Error finding container b2223d6f9afca60f1f25afab22f9a48f09bd9b9fbd260082ffdd9cc619702bcd: Status 404 returned error can't find the container with id b2223d6f9afca60f1f25afab22f9a48f09bd9b9fbd260082ffdd9cc619702bcd Jan 22 09:10:31 crc kubenswrapper[4892]: W0122 09:10:31.927158 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-a913c67fa9011968474e7af5393a09160dd8fc876844d2b360c4ed6123c66e25 WatchSource:0}: Error finding container a913c67fa9011968474e7af5393a09160dd8fc876844d2b360c4ed6123c66e25: Status 404 returned error can't find the container with id a913c67fa9011968474e7af5393a09160dd8fc876844d2b360c4ed6123c66e25 Jan 22 09:10:31 crc kubenswrapper[4892]: E0122 09:10:31.963909 4892 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" interval="800ms" Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.179596 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.180999 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.181042 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.181058 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.181086 4892 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 09:10:32 crc kubenswrapper[4892]: E0122 09:10:32.181587 4892 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.236:6443: connect: connection refused" node="crc" Jan 22 09:10:32 crc kubenswrapper[4892]: W0122 09:10:32.190460 4892 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused Jan 22 09:10:32 crc kubenswrapper[4892]: E0122 09:10:32.190541 4892 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.236:6443: connect: connection refused" logger="UnhandledError" Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.354718 4892 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.360796 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 12:18:28.967278486 +0000 UTC Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.422807 4892 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869" exitCode=0 Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.422875 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869"} Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.423014 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"98fa280da6bfad0c76330d83e3e8e2131c80c6a565bfb79dfb9bdb7393f6d8e9"} Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.423137 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.424209 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.424241 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.424251 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.425022 4892 generic.go:334] "Generic (PLEG): container finished" 
podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="23714f92b0c71e126b7a1182f93e3e64b5cd6171144b9f481653bde97c3a97db" exitCode=0 Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.425077 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"23714f92b0c71e126b7a1182f93e3e64b5cd6171144b9f481653bde97c3a97db"} Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.425104 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"a913c67fa9011968474e7af5393a09160dd8fc876844d2b360c4ed6123c66e25"} Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.425163 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.425624 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.425942 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.425963 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.425970 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.426526 4892 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ada7fe82bc11c5eb9546fd001120bf2f9df9d76c8f14b58c36de480f75c4a855" exitCode=0 Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.426562 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.426577 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.426587 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.426571 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ada7fe82bc11c5eb9546fd001120bf2f9df9d76c8f14b58c36de480f75c4a855"} Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.427199 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b2223d6f9afca60f1f25afab22f9a48f09bd9b9fbd260082ffdd9cc619702bcd"} Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.427470 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.428561 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.428604 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 
09:10:32.428616 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.432091 4892 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="025c122b5a9b05843b32891382b08a26e863fd5611e29ca3bfcfbfe10be5311d" exitCode=0 Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.432207 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"025c122b5a9b05843b32891382b08a26e863fd5611e29ca3bfcfbfe10be5311d"} Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.432253 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"74fd989fbf263a1e48b0b1eacedfd78ad34175df5b003d872374aa57cb2f4387"} Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.432719 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.433764 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.433795 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.433804 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.434152 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283"} Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.434191 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f4e6bbe2f57ddaf641263b0a02715350d743518b2eb9ddb6bbfec1395230df04"} Jan 22 09:10:32 crc kubenswrapper[4892]: W0122 09:10:32.516515 4892 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused Jan 22 09:10:32 crc kubenswrapper[4892]: E0122 09:10:32.516621 4892 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.236:6443: connect: connection refused" logger="UnhandledError" Jan 22 09:10:32 crc kubenswrapper[4892]: W0122 09:10:32.608770 4892 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused Jan 22 09:10:32 crc kubenswrapper[4892]: E0122 09:10:32.608849 4892 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.236:6443: connect: connection refused" logger="UnhandledError" Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.645321 4892 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 22 09:10:32 crc kubenswrapper[4892]: E0122 09:10:32.646561 4892 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.236:6443: connect: connection refused" logger="UnhandledError" Jan 22 09:10:32 crc kubenswrapper[4892]: W0122 09:10:32.707336 4892 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused Jan 22 09:10:32 crc kubenswrapper[4892]: E0122 09:10:32.707425 4892 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.236:6443: connect: connection refused" logger="UnhandledError" Jan 22 09:10:32 crc kubenswrapper[4892]: E0122 09:10:32.765006 4892 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" interval="1.6s" Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.982013 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.983571 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.983605 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.983618 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:32 crc kubenswrapper[4892]: I0122 09:10:32.983643 4892 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 09:10:32 crc kubenswrapper[4892]: E0122 09:10:32.984036 4892 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.236:6443: connect: connection refused" node="crc" Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.354932 4892 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.361638 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 
+0000 UTC, rotation deadline is 2025-12-22 09:23:28.82405485 +0000 UTC Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.438711 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"47c26cf9e131d882402ee4d883129a1aa107469310cd81b7b4132ff07af1c56d"} Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.438778 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"49f8d7970508541aff408a864ee28b62dfcc364c71eb9cf6a7d8bed65a048f94"} Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.438793 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a5b378f795b29dfc84b76bc5d00e62a720152c11de7706951cf16f6f28d22695"} Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.438808 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.439762 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.439800 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.439810 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.442060 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814"} Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.442092 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d"} Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.442108 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95"} Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.442115 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.442977 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.443012 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.443028 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.445644 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907"} Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.445705 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3"} Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.445722 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9"} Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.445734 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105"} Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.445746 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086"} Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.445889 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.446606 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.446639 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.446654 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.447070 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"ae3c7d604a49373e282a949d200d60c39a1abb9b46834d4a4746ca09448a1ddb"} Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.447186 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.448180 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.448214 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.448227 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.448725 4892 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9bdba653f2275486c746db16c53f2e17b427137a42ccd56a56b2b1c7306a0bd8" exitCode=0 Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.448759 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9bdba653f2275486c746db16c53f2e17b427137a42ccd56a56b2b1c7306a0bd8"} Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.448875 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.449564 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.449595 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.449608 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:33 crc kubenswrapper[4892]: I0122 09:10:33.930993 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:34 crc kubenswrapper[4892]: I0122 09:10:34.361784 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 00:02:39.665775353 +0000 UTC Jan 22 09:10:34 crc kubenswrapper[4892]: I0122 09:10:34.452517 4892 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a6e1e1ad7d5ef06201f297425120d6aefc9d0beac0c45d152b80158ca6c5177f" exitCode=0 Jan 22 09:10:34 crc kubenswrapper[4892]: I0122 09:10:34.452572 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a6e1e1ad7d5ef06201f297425120d6aefc9d0beac0c45d152b80158ca6c5177f"} Jan 22 09:10:34 crc kubenswrapper[4892]: I0122 09:10:34.452625 4892 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 09:10:34 crc kubenswrapper[4892]: I0122 09:10:34.452655 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:34 crc kubenswrapper[4892]: I0122 09:10:34.452655 4892 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 09:10:34 crc kubenswrapper[4892]: I0122 09:10:34.452841 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:34 crc kubenswrapper[4892]: I0122 09:10:34.452841 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:34 crc kubenswrapper[4892]: I0122 09:10:34.452675 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:34 crc kubenswrapper[4892]: I0122 09:10:34.453587 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:34 crc kubenswrapper[4892]: I0122 09:10:34.453619 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:34 crc kubenswrapper[4892]: I0122 09:10:34.453630 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:34 crc kubenswrapper[4892]: I0122 09:10:34.454106 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:34 crc kubenswrapper[4892]: I0122 09:10:34.454132 4892 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:34 crc kubenswrapper[4892]: I0122 09:10:34.454144 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:34 crc kubenswrapper[4892]: I0122 09:10:34.454173 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:34 crc kubenswrapper[4892]: I0122 09:10:34.454189 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:34 crc kubenswrapper[4892]: I0122 09:10:34.454198 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:34 crc kubenswrapper[4892]: I0122 09:10:34.454172 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:34 crc kubenswrapper[4892]: I0122 09:10:34.454228 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:34 crc kubenswrapper[4892]: I0122 09:10:34.454238 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:34 crc kubenswrapper[4892]: I0122 09:10:34.584672 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:34 crc kubenswrapper[4892]: I0122 09:10:34.585565 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:34 crc kubenswrapper[4892]: I0122 09:10:34.585690 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:34 crc kubenswrapper[4892]: I0122 09:10:34.585715 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:34 crc kubenswrapper[4892]: I0122 09:10:34.585739 4892 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 09:10:34 crc kubenswrapper[4892]: I0122 09:10:34.871677 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:35 crc kubenswrapper[4892]: I0122 09:10:35.362340 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 01:22:32.72560882 +0000 UTC Jan 22 09:10:35 crc kubenswrapper[4892]: I0122 09:10:35.459764 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e6db8a6876d8f939c4ce7c6bd05ab7463ea494b657a2672911f552f68e6a94fe"} Jan 22 09:10:35 crc kubenswrapper[4892]: I0122 09:10:35.459850 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1af229d025c2158dd87dc4c0867edcdd4e306f4ae2f294e5336c079db7497f83"} Jan 22 09:10:35 crc kubenswrapper[4892]: I0122 09:10:35.459873 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"194c4867d01d06b2227a5052123c7ff4b37bacab7e003b5a263796a99893022e"} Jan 22 09:10:35 crc kubenswrapper[4892]: I0122 09:10:35.459890 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"74e228c6a59f016f5d7cfd6317dda0268d2dd6f4ad77a83d05ca9e17667063cc"} Jan 22 09:10:35 crc kubenswrapper[4892]: I0122 09:10:35.459893 4892 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 09:10:35 crc kubenswrapper[4892]: I0122 09:10:35.459954 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:35 crc kubenswrapper[4892]: I0122 09:10:35.459907 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6f9c260b10401b80ae91d05b36efbddee2b1e1c9251e63430d9617932b801a01"} Jan 22 09:10:35 crc kubenswrapper[4892]: I0122 09:10:35.460055 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:35 crc kubenswrapper[4892]: I0122 09:10:35.460988 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:35 crc kubenswrapper[4892]: I0122 09:10:35.461044 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:35 crc kubenswrapper[4892]: I0122 09:10:35.461062 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:35 crc kubenswrapper[4892]: I0122 09:10:35.461102 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:35 crc kubenswrapper[4892]: I0122 09:10:35.461119 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:35 crc kubenswrapper[4892]: I0122 09:10:35.461128 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:35 crc kubenswrapper[4892]: I0122 09:10:35.977062 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:36 crc kubenswrapper[4892]: I0122 09:10:36.164940 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 09:10:36 crc kubenswrapper[4892]: I0122 09:10:36.165173 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:36 crc kubenswrapper[4892]: I0122 09:10:36.166663 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:36 crc kubenswrapper[4892]: I0122 09:10:36.166712 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:36 crc kubenswrapper[4892]: I0122 09:10:36.166729 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:36 crc kubenswrapper[4892]: I0122 09:10:36.362690 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 19:21:25.775963316 +0000 UTC Jan 22 09:10:36 crc kubenswrapper[4892]: I0122 09:10:36.462788 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:36 crc kubenswrapper[4892]: I0122 09:10:36.462905 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Jan 22 09:10:36 crc kubenswrapper[4892]: I0122 09:10:36.463941 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:36 crc kubenswrapper[4892]: I0122 09:10:36.464002 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:36 crc kubenswrapper[4892]: I0122 09:10:36.464028 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:36 crc kubenswrapper[4892]: I0122 09:10:36.464206 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:36 crc kubenswrapper[4892]: I0122 09:10:36.464264 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:36 crc kubenswrapper[4892]: I0122 09:10:36.464310 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:36 crc kubenswrapper[4892]: I0122 09:10:36.629769 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:10:36 crc kubenswrapper[4892]: I0122 09:10:36.630008 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:36 crc kubenswrapper[4892]: I0122 09:10:36.631792 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:36 crc kubenswrapper[4892]: I0122 09:10:36.631870 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:36 crc kubenswrapper[4892]: I0122 09:10:36.631899 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:37 crc kubenswrapper[4892]: I0122 09:10:37.029164 4892 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 22 09:10:37 crc kubenswrapper[4892]: I0122 09:10:37.363083 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 13:40:23.932602834 +0000 UTC Jan 22 09:10:37 crc kubenswrapper[4892]: I0122 09:10:37.487514 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:10:37 crc kubenswrapper[4892]: I0122 09:10:37.487817 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:37 crc kubenswrapper[4892]: I0122 09:10:37.490225 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:37 crc kubenswrapper[4892]: I0122 09:10:37.490333 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:37 crc kubenswrapper[4892]: I0122 09:10:37.490355 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:37 crc kubenswrapper[4892]: I0122 09:10:37.498264 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:10:37 crc kubenswrapper[4892]: I0122 09:10:37.570331 4892 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 22 09:10:37 crc kubenswrapper[4892]: I0122 09:10:37.570745 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:37 crc kubenswrapper[4892]: I0122 09:10:37.572627 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:37 crc kubenswrapper[4892]: I0122 09:10:37.572697 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:37 crc kubenswrapper[4892]: I0122 09:10:37.572717 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:37 crc kubenswrapper[4892]: I0122 09:10:37.661425 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 22 09:10:38 crc kubenswrapper[4892]: I0122 09:10:38.353564 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:10:38 crc kubenswrapper[4892]: I0122 09:10:38.363604 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 14:47:51.330008879 +0000 UTC Jan 22 09:10:38 crc kubenswrapper[4892]: I0122 09:10:38.468648 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:38 crc kubenswrapper[4892]: I0122 09:10:38.468721 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:38 crc kubenswrapper[4892]: I0122 09:10:38.469953 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:38 crc kubenswrapper[4892]: I0122 09:10:38.469989 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:38 crc kubenswrapper[4892]: I0122 09:10:38.470003 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:38 crc kubenswrapper[4892]: I0122 09:10:38.470131 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:38 crc kubenswrapper[4892]: I0122 09:10:38.470170 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:38 crc kubenswrapper[4892]: I0122 09:10:38.470190 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:39 crc kubenswrapper[4892]: I0122 09:10:39.302924 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:10:39 crc kubenswrapper[4892]: I0122 09:10:39.364350 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 07:46:30.493912982 +0000 UTC Jan 22 09:10:39 crc kubenswrapper[4892]: I0122 09:10:39.470893 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:39 crc kubenswrapper[4892]: I0122 09:10:39.472209 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:39 crc kubenswrapper[4892]: I0122 
09:10:39.472350 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:39 crc kubenswrapper[4892]: I0122 09:10:39.472381 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:40 crc kubenswrapper[4892]: I0122 09:10:40.364447 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 09:46:50.868634117 +0000 UTC Jan 22 09:10:40 crc kubenswrapper[4892]: I0122 09:10:40.474495 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:40 crc kubenswrapper[4892]: I0122 09:10:40.476833 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:40 crc kubenswrapper[4892]: I0122 09:10:40.476888 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:40 crc kubenswrapper[4892]: I0122 09:10:40.476902 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:41 crc kubenswrapper[4892]: I0122 09:10:41.354494 4892 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 09:10:41 crc kubenswrapper[4892]: I0122 09:10:41.354568 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 09:10:41 crc kubenswrapper[4892]: I0122 09:10:41.365375 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 18:16:30.259928105 +0000 UTC Jan 22 09:10:41 crc kubenswrapper[4892]: E0122 09:10:41.479499 4892 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 09:10:42 crc kubenswrapper[4892]: I0122 09:10:42.366089 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 18:21:39.990766122 +0000 UTC Jan 22 09:10:43 crc kubenswrapper[4892]: I0122 09:10:43.366622 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 22:14:07.36911273 +0000 UTC Jan 22 09:10:43 crc kubenswrapper[4892]: I0122 09:10:43.931347 4892 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 09:10:43 crc kubenswrapper[4892]: I0122 09:10:43.931445 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 09:10:44 crc kubenswrapper[4892]: I0122 09:10:44.354790 4892 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 22 09:10:44 crc kubenswrapper[4892]: E0122 09:10:44.366383 4892 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Jan 22 09:10:44 crc kubenswrapper[4892]: I0122 09:10:44.367484 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 02:32:18.012451213 +0000 UTC Jan 22 09:10:44 crc kubenswrapper[4892]: W0122 09:10:44.427163 4892 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 22 09:10:44 crc kubenswrapper[4892]: I0122 09:10:44.427249 4892 trace.go:236] Trace[1691436149]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 09:10:34.425) (total time: 10001ms): Jan 22 09:10:44 crc kubenswrapper[4892]: Trace[1691436149]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (09:10:44.427) Jan 22 09:10:44 crc kubenswrapper[4892]: Trace[1691436149]: [10.001426688s] [10.001426688s] END Jan 22 09:10:44 crc kubenswrapper[4892]: E0122 09:10:44.427270 4892 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 22 09:10:44 crc kubenswrapper[4892]: I0122 09:10:44.453098 4892 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 22 09:10:44 crc kubenswrapper[4892]: I0122 09:10:44.453183 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 22 09:10:45 crc kubenswrapper[4892]: I0122 09:10:45.367960 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 18:15:01.634690378 +0000 UTC Jan 22 09:10:46 crc kubenswrapper[4892]: I0122 09:10:46.369019 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 
2026-01-04 21:55:15.758200179 +0000 UTC Jan 22 09:10:47 crc kubenswrapper[4892]: I0122 09:10:47.369841 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 20:47:40.007974652 +0000 UTC Jan 22 09:10:47 crc kubenswrapper[4892]: I0122 09:10:47.599143 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 22 09:10:47 crc kubenswrapper[4892]: I0122 09:10:47.599401 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:47 crc kubenswrapper[4892]: I0122 09:10:47.605445 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:47 crc kubenswrapper[4892]: I0122 09:10:47.605495 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:47 crc kubenswrapper[4892]: I0122 09:10:47.605510 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:47 crc kubenswrapper[4892]: I0122 09:10:47.617731 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 22 09:10:48 crc kubenswrapper[4892]: I0122 09:10:48.371053 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 02:25:25.384312719 +0000 UTC Jan 22 09:10:48 crc kubenswrapper[4892]: I0122 09:10:48.494864 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:48 crc kubenswrapper[4892]: I0122 09:10:48.496267 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:48 crc kubenswrapper[4892]: I0122 09:10:48.496334 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:48 crc kubenswrapper[4892]: I0122 09:10:48.496347 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:48 crc kubenswrapper[4892]: I0122 09:10:48.937118 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:48 crc kubenswrapper[4892]: I0122 09:10:48.937425 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:48 crc kubenswrapper[4892]: I0122 09:10:48.938816 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:48 crc kubenswrapper[4892]: I0122 09:10:48.938884 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:48 crc kubenswrapper[4892]: I0122 09:10:48.938907 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:48 crc kubenswrapper[4892]: I0122 09:10:48.942204 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:49 crc kubenswrapper[4892]: I0122 09:10:49.310641 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:10:49 crc kubenswrapper[4892]: I0122 09:10:49.310884 4892 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:10:49 crc kubenswrapper[4892]: I0122 09:10:49.312389 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:49 crc kubenswrapper[4892]: I0122 09:10:49.312436 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:49 crc kubenswrapper[4892]: I0122 09:10:49.312451 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:49 crc kubenswrapper[4892]: I0122 09:10:49.371617 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 14:17:53.947925997 +0000 UTC Jan 22 09:10:49 crc kubenswrapper[4892]: I0122 09:10:49.466038 4892 trace.go:236] Trace[1649444892]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 09:10:34.602) (total time: 14863ms): Jan 22 09:10:49 crc kubenswrapper[4892]: Trace[1649444892]: ---"Objects listed" error: 14863ms (09:10:49.465) Jan 22 09:10:49 crc kubenswrapper[4892]: Trace[1649444892]: [14.863790565s] [14.863790565s] END Jan 22 09:10:49 crc kubenswrapper[4892]: I0122 09:10:49.466096 4892 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 22 09:10:49 crc kubenswrapper[4892]: I0122 09:10:49.466657 4892 trace.go:236] Trace[1571583494]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 09:10:35.399) (total time: 14066ms): Jan 22 09:10:49 crc kubenswrapper[4892]: Trace[1571583494]: ---"Objects listed" error: 14066ms (09:10:49.466) Jan 22 09:10:49 crc kubenswrapper[4892]: Trace[1571583494]: [14.066972091s] [14.066972091s] END Jan 22 09:10:49 crc kubenswrapper[4892]: I0122 09:10:49.466697 4892 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 22 09:10:49 crc kubenswrapper[4892]: I0122 09:10:49.467805 4892 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 22 09:10:49 crc kubenswrapper[4892]: I0122 09:10:49.479802 4892 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 22 09:10:49 crc kubenswrapper[4892]: E0122 09:10:49.488141 4892 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 22 09:10:49 crc kubenswrapper[4892]: I0122 09:10:49.489192 4892 trace.go:236] Trace[1367586246]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 09:10:35.137) (total time: 14351ms): Jan 22 09:10:49 crc kubenswrapper[4892]: Trace[1367586246]: ---"Objects listed" error: 14350ms (09:10:49.488) Jan 22 09:10:49 crc kubenswrapper[4892]: Trace[1367586246]: [14.351127004s] [14.351127004s] END Jan 22 09:10:49 crc kubenswrapper[4892]: I0122 09:10:49.489250 4892 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 22 09:10:49 crc kubenswrapper[4892]: I0122 09:10:49.497184 4892 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 09:10:49 crc kubenswrapper[4892]: I0122 09:10:49.557702 4892 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints 
namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60760->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 22 09:10:49 crc kubenswrapper[4892]: I0122 09:10:49.557761 4892 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60760->192.168.126.11:17697: read: connection reset by peer" Jan 22 09:10:49 crc kubenswrapper[4892]: I0122 09:10:49.558178 4892 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 22 09:10:49 crc kubenswrapper[4892]: I0122 09:10:49.558225 4892 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 22 09:10:49 crc kubenswrapper[4892]: I0122 09:10:49.622932 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:10:49 crc kubenswrapper[4892]: I0122 09:10:49.626570 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:10:49 crc kubenswrapper[4892]: I0122 09:10:49.881928 4892 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.357987 4892 apiserver.go:52] "Watching apiserver" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.359850 4892 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.360062 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.360537 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.360600 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:10:50 crc kubenswrapper[4892]: E0122 09:10:50.360709 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.360728 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.360883 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.360986 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.361028 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 09:10:50 crc kubenswrapper[4892]: E0122 09:10:50.360999 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:10:50 crc kubenswrapper[4892]: E0122 09:10:50.361232 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.363233 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.364045 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.364062 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.364101 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.364129 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.364146 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.364054 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.364166 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.368311 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.371910 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 21:59:58.746621318 +0000 UTC Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.374957 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.375000 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.375077 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.375102 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.375117 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.375154 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.375172 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.375189 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.375209 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.376081 4892 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.376396 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.378229 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.386333 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.390571 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.391857 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:10:50 crc kubenswrapper[4892]: E0122 09:10:50.396648 4892 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:10:50 crc kubenswrapper[4892]: E0122 09:10:50.396676 4892 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:10:50 crc kubenswrapper[4892]: E0122 09:10:50.396693 4892 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:10:50 crc kubenswrapper[4892]: E0122 09:10:50.396762 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 09:10:50.896742385 +0000 UTC m=+20.740821448 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:10:50 crc kubenswrapper[4892]: E0122 09:10:50.401325 4892 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:10:50 crc kubenswrapper[4892]: E0122 09:10:50.401361 4892 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:10:50 crc kubenswrapper[4892]: E0122 09:10:50.401377 4892 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:10:50 crc kubenswrapper[4892]: E0122 09:10:50.401441 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 09:10:50.901420336 +0000 UTC m=+20.745499459 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.404339 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.405852 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.408782 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.425493 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.437822 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.456247 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.461599 4892 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.470072 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.476376 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.476429 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.476456 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.476479 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.476505 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.476527 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.476549 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 
09:10:50.476575 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.476599 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.476623 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.476647 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.476699 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.476727 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.476751 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.476779 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.476808 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.476831 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.476857 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.476880 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.476903 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.476929 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.476965 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.476989 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477016 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477040 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477066 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477091 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod 
\"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477117 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477142 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477165 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477190 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477217 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477242 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477268 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477311 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477336 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477360 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477383 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477405 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477433 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477462 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477483 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477506 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477528 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477572 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477593 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477613 4892 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477632 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477654 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477678 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477699 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477721 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477744 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477765 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477789 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477815 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 22 09:10:50 crc 
kubenswrapper[4892]: I0122 09:10:50.477839 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477864 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477888 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477913 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477935 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477959 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.477982 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478004 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478026 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478048 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" 
(UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478074 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478098 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478118 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478141 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478163 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478187 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478208 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478232 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478255 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478277 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod 
\"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478317 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478340 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478363 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478398 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478421 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478445 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478466 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478488 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478510 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478533 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478554 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478583 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478607 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478629 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478651 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478673 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478697 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478721 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478743 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478766 4892 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478791 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478812 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478835 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478857 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478880 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478904 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478926 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478951 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478975 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 22 
09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.478999 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479022 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479044 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479067 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479091 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479113 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479137 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479161 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479185 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479209 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" 
(UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479232 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479254 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479275 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479324 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479350 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479375 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479400 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479422 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479446 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479469 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479493 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479515 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479537 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479560 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479583 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479605 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479628 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479654 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479676 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479705 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479727 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479751 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479773 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479797 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479822 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479846 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479871 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479895 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479919 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479942 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 22 09:10:50 crc 
kubenswrapper[4892]: I0122 09:10:50.479964 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.479988 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.480011 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.480034 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.480058 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.480082 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.480107 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.480132 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.480154 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.480180 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 
09:10:50.480205 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.480228 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.480250 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.480276 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.480352 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.480381 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.480404 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.480559 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.480584 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.480607 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 09:10:50 
crc kubenswrapper[4892]: I0122 09:10:50.480633 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.480657 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.480685 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.480714 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.480737 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.480763 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.480791 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.480816 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.480839 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.480864 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.480888 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.480914 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.480938 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.480963 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.480988 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.481011 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.481035 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.481070 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.481097 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.481121 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.481145 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.481169 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.481194 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.481220 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.481244 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.481269 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.481315 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.481340 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.481364 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.481390 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.481473 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.481560 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.481629 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.481673 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.481729 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.481778 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:10:50 crc kubenswrapper[4892]: E0122 09:10:50.481953 4892 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:10:50 crc kubenswrapper[4892]: E0122 09:10:50.482060 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 09:10:50.982043263 +0000 UTC m=+20.826122326 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.482709 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.483159 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.483347 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: E0122 09:10:50.483529 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:10:50.98351013 +0000 UTC m=+20.827589193 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.483829 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.483884 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.484134 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.484439 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.484672 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.485055 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.485485 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.486368 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.486873 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.487237 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.487489 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.487716 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.488232 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.489016 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.489907 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.490475 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.490782 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.490762 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.490800 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.491034 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.491371 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.491932 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.491925 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.492323 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.492343 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.492485 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.492638 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.492678 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.492959 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.493131 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.493628 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.493673 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.494021 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.494033 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.494043 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.494204 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.494433 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.494770 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.495087 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.495187 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.495375 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.495431 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.495490 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.495741 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.495899 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.496022 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.496250 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.496443 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.496271 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.496691 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.496713 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.496910 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.497125 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.497193 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.497544 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.497916 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.498018 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.498183 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.498452 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.498222 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.498742 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.498797 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.498838 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.498892 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.499423 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.499572 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.502016 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.502725 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.503463 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.503757 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.513732 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 09:10:50 crc kubenswrapper[4892]: E0122 09:10:50.516930 4892 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 22 09:10:50 crc kubenswrapper[4892]: E0122 09:10:50.517032 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 09:10:51.016990702 +0000 UTC m=+20.861069765 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.521525 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.521663 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.530716 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.530923 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.531020 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.531040 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.533153 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.534333 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.534705 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.535124 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.535349 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.536423 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.540812 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.541083 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.541107 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.541401 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.541550 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.541732 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.541787 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.542251 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.542677 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.542826 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.543070 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.544374 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.544379 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.545067 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.544403 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.544494 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.544506 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.544513 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.544623 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.544980 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.545193 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.545357 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.545531 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.546137 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.546390 4892 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907" exitCode=255
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.546974 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.547023 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.547167 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.547233 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.547321 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.547373 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.547410 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.547454 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907"}
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.547426 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.547667 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.547698 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.547978 4892 scope.go:117] "RemoveContainer" containerID="958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907"
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.548811 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.548841 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.548927 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.549342 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.549703 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.550287 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.550547 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.550908 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.551311 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.551347 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.551241 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.551489 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.551490 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.551580 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.551920 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.552113 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.552153 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.552426 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.552483 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.553079 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.553130 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.553262 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.553172 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.553446 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.553692 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.554053 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.554141 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.554216 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.554530 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.554175 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.554743 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.554855 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.554873 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.554884 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.554266 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.555231 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: E0122 09:10:50.555263 4892 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.555476 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.557069 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.557607 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.557775 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.557837 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.557903 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.558194 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.558643 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.558938 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.559013 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.559868 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.560349 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.560369 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.560450 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.560564 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.560754 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.560892 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.561321 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.562443 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.562778 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.562794 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.562920 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.563054 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.563621 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.563676 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.563680 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.563852 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.563855 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.563941 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.564167 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.564366 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.565506 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.565991 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.566005 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.566009 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.566080 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.566102 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.566144 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.566192 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.566201 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.566564 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.567067 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.572133 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c
5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.575950 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.581369 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583155 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583331 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583344 4892 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583395 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583407 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583418 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583429 4892 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583442 4892 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 
09:10:50.583452 4892 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583464 4892 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583474 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583484 4892 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583498 4892 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583511 4892 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583520 4892 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583530 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583541 4892 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583551 4892 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583561 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583571 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583579 4892 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 
09:10:50.583590 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583600 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583609 4892 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583620 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583631 4892 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583642 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583652 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583662 4892 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583671 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583682 4892 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583691 4892 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583701 4892 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583711 4892 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 
09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583720 4892 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583728 4892 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583737 4892 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583747 4892 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583756 4892 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583765 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583774 4892 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583782 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583790 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583800 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583808 4892 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583819 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583828 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 
crc kubenswrapper[4892]: I0122 09:10:50.583837 4892 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583846 4892 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583855 4892 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583864 4892 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583873 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583886 4892 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583895 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583904 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583912 4892 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583920 4892 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583929 4892 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583938 4892 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583947 4892 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" 
DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583956 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583964 4892 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583974 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583983 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.583992 4892 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584002 4892 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584011 4892 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584021 4892 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584030 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584038 4892 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584048 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584056 4892 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584065 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584073 4892 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584081 4892 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584090 4892 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584099 4892 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584107 4892 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584115 4892 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584125 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584133 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584141 4892 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584149 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584156 4892 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584164 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584173 4892 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584181 4892 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584190 4892 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584199 4892 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584208 4892 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584217 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584226 4892 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584234 4892 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584242 4892 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584251 4892 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584259 4892 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584267 4892 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584276 4892 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584287 4892 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584347 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584357 4892 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584365 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584372 4892 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584381 4892 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584389 4892 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584397 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584405 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584414 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584422 4892 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584430 4892 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584438 4892 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584446 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 
09:10:50.584455 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584463 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584471 4892 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584479 4892 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584487 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584496 4892 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584504 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584512 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584520 4892 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584528 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584536 4892 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584545 4892 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584554 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584563 4892 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584571 4892 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584579 4892 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584589 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584597 4892 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584605 4892 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584613 4892 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584621 4892 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584631 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584639 4892 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584648 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584656 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584665 4892 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584674 4892 
reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584682 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584690 4892 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584698 4892 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584706 4892 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584714 4892 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584722 4892 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584730 4892 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584738 4892 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584746 4892 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584754 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584763 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584771 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584779 4892 reconciler_common.go:293] "Volume detached for volume 
\"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584788 4892 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584796 4892 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584804 4892 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584812 4892 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584820 4892 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584829 4892 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584837 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584845 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584854 4892 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584861 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584869 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584877 4892 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584885 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584893 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584902 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584911 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584919 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584928 4892 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584936 4892 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584945 4892 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584954 4892 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584962 4892 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584970 4892 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584979 4892 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584987 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.584995 4892 
reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.585003 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.585011 4892 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.585019 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.585028 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.585036 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.585044 4892 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.585053 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.585061 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.585070 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.585077 4892 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.585086 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.585094 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.585104 4892 reconciler_common.go:293] "Volume detached for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.585114 4892 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.585125 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.585135 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.586024 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.589716 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.599718 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.603342 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.614517 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.627078 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.639174 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.651525 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22
T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.662444 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.672877 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.673936 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.682229 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.682968 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.686193 4892 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.686225 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:50 crc kubenswrapper[4892]: W0122 09:10:50.699619 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-ad3994cd236e65f77e48121d973d83bc30e1999a362a2ec9460ceb790284c35f WatchSource:0}: Error finding container ad3994cd236e65f77e48121d973d83bc30e1999a362a2ec9460ceb790284c35f: Status 404 returned error can't find the container with id ad3994cd236e65f77e48121d973d83bc30e1999a362a2ec9460ceb790284c35f Jan 22 09:10:50 crc kubenswrapper[4892]: W0122 09:10:50.700058 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-ca8ffaeeb412bb5ef17413fd2f4d057459e3aeda80765f2bddc127153fb21f8a WatchSource:0}: Error finding container ca8ffaeeb412bb5ef17413fd2f4d057459e3aeda80765f2bddc127153fb21f8a: Status 404 returned error can't find the container with id ca8ffaeeb412bb5ef17413fd2f4d057459e3aeda80765f2bddc127153fb21f8a Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.712070 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 09:10:50 crc kubenswrapper[4892]: W0122 09:10:50.730232 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-5c4f8814ccf9f4f63ea887851d3581c8ae30a6097373ff33247115f060ed2360 WatchSource:0}: Error finding container 5c4f8814ccf9f4f63ea887851d3581c8ae30a6097373ff33247115f060ed2360: Status 404 returned error can't find the container with id 5c4f8814ccf9f4f63ea887851d3581c8ae30a6097373ff33247115f060ed2360 Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.988597 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.988720 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.988759 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:10:50 crc kubenswrapper[4892]: I0122 09:10:50.988795 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:10:50 crc kubenswrapper[4892]: E0122 09:10:50.988881 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:10:51.988856887 +0000 UTC m=+21.832935980 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:10:50 crc kubenswrapper[4892]: E0122 09:10:50.988964 4892 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:10:50 crc kubenswrapper[4892]: E0122 09:10:50.988996 4892 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:10:50 crc kubenswrapper[4892]: E0122 09:10:50.989011 4892 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:10:50 crc kubenswrapper[4892]: E0122 09:10:50.989028 4892 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:10:50 crc kubenswrapper[4892]: E0122 09:10:50.989068 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 09:10:51.989048932 +0000 UTC m=+21.833127985 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:10:50 crc kubenswrapper[4892]: E0122 09:10:50.989113 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 09:10:51.989091543 +0000 UTC m=+21.833170636 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:10:50 crc kubenswrapper[4892]: E0122 09:10:50.989226 4892 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:10:50 crc kubenswrapper[4892]: E0122 09:10:50.989247 4892 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:10:50 crc kubenswrapper[4892]: E0122 09:10:50.989266 4892 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:10:50 crc kubenswrapper[4892]: E0122 09:10:50.989351 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 09:10:51.98933645 +0000 UTC m=+21.833415543 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.089412 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:10:51 crc kubenswrapper[4892]: E0122 09:10:51.089488 4892 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 09:10:51 crc kubenswrapper[4892]: E0122 09:10:51.089541 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 09:10:52.089526699 +0000 UTC m=+21.933605762 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.344797 4892 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.372592 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 10:50:07.981587517 +0000 UTC Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.424144 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.424991 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.425780 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.426583 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.427266 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.427912 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.428586 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.429141 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.429745 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.430350 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.430831 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 
09:10:51.431521 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.432049 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.434020 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.435536 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.436891 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.438880 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.439044 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.439835 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.441066 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.442773 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.443312 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.443958 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.444430 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.445144 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.445621 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.446341 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.446962 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.447556 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.448219 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.448772 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.449323 4892 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.449447 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.450829 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.453337 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.453963 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.455207 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.455555 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.456412 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.456949 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.457661 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.458501 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.459072 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.459848 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.460661 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.461413 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.461976 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.462603 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.463265 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.465591 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.466364 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.467000 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.467675 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.468509 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.469482 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.470140 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.479180 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22
T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.506771 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.522112 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.534027 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.546304 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.550417 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292"} Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.550456 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ad3994cd236e65f77e48121d973d83bc30e1999a362a2ec9460ceb790284c35f"} Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.552442 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.554047 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c"} Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.558131 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"5c4f8814ccf9f4f63ea887851d3581c8ae30a6097373ff33247115f060ed2360"} Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.560437 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014"} Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.560482 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c"} Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.560495 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ca8ffaeeb412bb5ef17413fd2f4d057459e3aeda80765f2bddc127153fb21f8a"} Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 
09:10:51.564108 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.578527 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.591266 4892 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.606566 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.622027 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.635557 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.650190 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.662355 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.674354 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.999370 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.999430 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.999451 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:10:51 crc kubenswrapper[4892]: I0122 09:10:51.999469 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:10:51 crc kubenswrapper[4892]: E0122 09:10:51.999581 4892 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:10:51 crc kubenswrapper[4892]: E0122 09:10:51.999603 4892 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:10:51 crc kubenswrapper[4892]: E0122 09:10:51.999616 4892 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:10:51 crc kubenswrapper[4892]: E0122 
09:10:51.999665 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 09:10:53.999649207 +0000 UTC m=+23.843728270 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:10:52 crc kubenswrapper[4892]: E0122 09:10:51.999989 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:10:53.999979075 +0000 UTC m=+23.844058138 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:10:52 crc kubenswrapper[4892]: E0122 09:10:52.000047 4892 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:10:52 crc kubenswrapper[4892]: E0122 09:10:52.000063 4892 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:10:52 crc kubenswrapper[4892]: E0122 09:10:52.000073 4892 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:10:52 crc kubenswrapper[4892]: E0122 09:10:52.000102 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 09:10:54.000092758 +0000 UTC m=+23.844171831 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 09:10:52 crc kubenswrapper[4892]: E0122 09:10:52.000150 4892 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 22 09:10:52 crc kubenswrapper[4892]: E0122 09:10:52.000229 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 09:10:54.00016905 +0000 UTC m=+23.844248123 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.100622 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 09:10:52 crc kubenswrapper[4892]: E0122 09:10:52.100738 4892 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 22 09:10:52 crc kubenswrapper[4892]: E0122 09:10:52.100788 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 09:10:54.100775441 +0000 UTC m=+23.944854504 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.373777 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 01:00:08.608332678 +0000 UTC
Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.417916 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.417937 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 09:10:52 crc kubenswrapper[4892]: E0122 09:10:52.418135 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.417956 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 09:10:52 crc kubenswrapper[4892]: E0122 09:10:52.418184 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 09:10:52 crc kubenswrapper[4892]: E0122 09:10:52.418305 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.563110 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.688640 4892 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.690933 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.690983 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.690995 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.691062 4892 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.701184 4892 kubelet_node_status.go:115] "Node was previously registered" node="crc"
Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.701331 4892 kubelet_node_status.go:79] "Successfully registered node" node="crc"
Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.702651 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.702703 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.702721 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.702744 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.702761 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:52Z","lastTransitionTime":"2026-01-22T09:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:10:52 crc kubenswrapper[4892]: E0122 09:10:52.726378 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c930485-9734-4304-ad2c-ecfe6f90ae0f\\\",\\\"systemUUID\\\":\\\"61509b40-08df-4430-847e-d3a8d2681f9e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:52Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.730386 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.730426 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.730435 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.730451 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.730461 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:52Z","lastTransitionTime":"2026-01-22T09:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:52 crc kubenswrapper[4892]: E0122 09:10:52.742616 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c930485-9734-4304-ad2c-ecfe6f90ae0f\\\",\\\"systemUUID\\\":\\\"61509b40-08df-4430-847e-d3a8d2681f9e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:52Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.746347 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.746377 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.746386 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.746400 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.746410 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:52Z","lastTransitionTime":"2026-01-22T09:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:52 crc kubenswrapper[4892]: E0122 09:10:52.761245 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c930485-9734-4304-ad2c-ecfe6f90ae0f\\\",\\\"systemUUID\\\":\\\"61509b40-08df-4430-847e-d3a8d2681f9e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:52Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.768275 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.768358 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.768368 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.768383 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.768410 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:52Z","lastTransitionTime":"2026-01-22T09:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:52 crc kubenswrapper[4892]: E0122 09:10:52.782936 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c930485-9734-4304-ad2c-ecfe6f90ae0f\\\",\\\"systemUUID\\\":\\\"61509b40-08df-4430-847e-d3a8d2681f9e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:52Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.786598 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.786633 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.786641 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.786655 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.786665 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:52Z","lastTransitionTime":"2026-01-22T09:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:52 crc kubenswrapper[4892]: E0122 09:10:52.799721 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c930485-9734-4304-ad2c-ecfe6f90ae0f\\\",\\\"systemUUID\\\":\\\"61509b40-08df-4430-847e-d3a8d2681f9e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:52Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:52 crc kubenswrapper[4892]: E0122 09:10:52.799829 4892 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.801197 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.801239 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.801249 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.801265 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.801277 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:52Z","lastTransitionTime":"2026-01-22T09:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.903558 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.903618 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.903628 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.903642 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:52 crc kubenswrapper[4892]: I0122 09:10:52.903652 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:52Z","lastTransitionTime":"2026-01-22T09:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.006192 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.006254 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.006272 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.006331 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.006406 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:53Z","lastTransitionTime":"2026-01-22T09:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.109468 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.109531 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.109550 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.109577 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.109595 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:53Z","lastTransitionTime":"2026-01-22T09:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.212267 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.212383 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.212408 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.212440 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.212462 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:53Z","lastTransitionTime":"2026-01-22T09:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.315332 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.315377 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.315389 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.315407 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.315419 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:53Z","lastTransitionTime":"2026-01-22T09:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
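Has your network provider started?"}

The "Failed to update status" errors in this section share one root cause: the network-node-identity webhook is serving a certificate that expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-22, so every status patch the kubelet posts is rejected at the TLS layer. One way to confirm what that webhook is serving, offered as a suggestion rather than log output (assumes openssl is available inside the VM; the 127.0.0.1:9743 endpoint is taken from the errors above):

    # Print the validity window of the certificate on the webhook port;
    # notAfter should match the 2025-08-24T17:21:41Z date in the kubelet errors.
    openssl s_client -connect 127.0.0.1:9743 </dev/null 2>/dev/null \
        | openssl x509 -noout -dates

The kubelet-serving certificate in the next entry is still valid until 2026-02-24, which narrows the failure to the webhook's separately managed certificate rather than to kubelet credentials in general.
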
Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.374374 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 02:40:10.437540423 +0000 UTC Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.417828 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.417870 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.417881 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.417894 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.417904 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:53Z","lastTransitionTime":"2026-01-22T09:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.520406 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.520444 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.520456 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.520472 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.520485 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:53Z","lastTransitionTime":"2026-01-22T09:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.566524 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81"} Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.581386 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:53Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.594060 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:53Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.607096 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:53Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.618772 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:53Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.622306 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.622352 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.622363 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.622379 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.622390 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:53Z","lastTransitionTime":"2026-01-22T09:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.633784 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:53Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.646689 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:53Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.665703 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:53Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.682880 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:53Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.724992 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.725046 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.725054 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.725068 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.725077 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:53Z","lastTransitionTime":"2026-01-22T09:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.827824 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.827854 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.827864 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.827876 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.827886 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:53Z","lastTransitionTime":"2026-01-22T09:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.930015 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.930075 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.930093 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.930112 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:53 crc kubenswrapper[4892]: I0122 09:10:53.930124 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:53Z","lastTransitionTime":"2026-01-22T09:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.015216 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.015319 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.015347 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.015368 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:10:54 crc kubenswrapper[4892]: E0122 09:10:54.015496 4892 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:10:54 crc kubenswrapper[4892]: E0122 09:10:54.015515 4892 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:10:54 crc kubenswrapper[4892]: E0122 09:10:54.015526 4892 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:10:54 crc kubenswrapper[4892]: E0122 09:10:54.015560 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:10:58.015513114 +0000 UTC m=+27.859592207 (durationBeforeRetry 4s). 
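Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers

The teardown fails because the kubevirt.io.hostpath-provisioner node plugin has not re-registered with this kubelet since the restart; the operation is retried every 4s and will keep failing until it does. Two checks, offered as a sketch (the registry path is the kubelet default, and the driver name comes from the error above):

    # CSI node plugins register by placing a socket in the kubelet plugin
    # registry; look for one named after kubevirt.io.hostpath-provisioner.
    ls /var/lib/kubelet/plugins_registry/
    # Once the API server is reachable again, list cluster-registered drivers:
    oc get csidrivers
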
Jan 22 09:10:54 crc kubenswrapper[4892]: E0122 09:10:54.015625 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 09:10:58.015606457 +0000 UTC m=+27.859685560 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:10:54 crc kubenswrapper[4892]: E0122 09:10:54.015562 4892 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:10:54 crc kubenswrapper[4892]: E0122 09:10:54.015642 4892 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:10:54 crc kubenswrapper[4892]: E0122 09:10:54.015822 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 09:10:58.015778701 +0000 UTC m=+27.859857774 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:10:54 crc kubenswrapper[4892]: E0122 09:10:54.015828 4892 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:10:54 crc kubenswrapper[4892]: E0122 09:10:54.015849 4892 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:10:54 crc kubenswrapper[4892]: E0122 09:10:54.015912 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 09:10:58.015894374 +0000 UTC m=+27.859973437 (durationBeforeRetry 4s). 
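Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]

"Not registered" in these mount errors means the kubelet's watch cache has not yet synced those ConfigMaps after the restart, not necessarily that they are missing from the cluster. A way to distinguish the two cases once the API answers, again as a suggestion rather than log output (names and namespace are taken from the messages above):

    # If these objects exist, the mounts should succeed once the kubelet's
    # informers finish syncing; if absent, the CA publishers are at fault.
    oc -n openshift-network-diagnostics get configmap kube-root-ca.crt openshift-service-ca.crt
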
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.032413 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.032459 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.032468 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.032480 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.032490 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:54Z","lastTransitionTime":"2026-01-22T09:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.116155 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:10:54 crc kubenswrapper[4892]: E0122 09:10:54.116362 4892 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 09:10:54 crc kubenswrapper[4892]: E0122 09:10:54.116461 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 09:10:58.116438423 +0000 UTC m=+27.960517566 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.135042 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.135082 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.135093 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.135109 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.135120 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:54Z","lastTransitionTime":"2026-01-22T09:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.237607 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.237635 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.237643 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.237655 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.237663 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:54Z","lastTransitionTime":"2026-01-22T09:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.339598 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.339657 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.339674 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.339698 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.339714 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:54Z","lastTransitionTime":"2026-01-22T09:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.375301 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 21:05:14.143681033 +0000 UTC Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.417938 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.417988 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:10:54 crc kubenswrapper[4892]: E0122 09:10:54.418116 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.418198 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:10:54 crc kubenswrapper[4892]: E0122 09:10:54.418322 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:10:54 crc kubenswrapper[4892]: E0122 09:10:54.418475 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.441342 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.441394 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.441409 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.441429 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.441445 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:54Z","lastTransitionTime":"2026-01-22T09:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.544133 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.544166 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.544174 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.544187 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.544196 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:54Z","lastTransitionTime":"2026-01-22T09:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.646983 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.647031 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.647042 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.647084 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.647098 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:54Z","lastTransitionTime":"2026-01-22T09:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.748588 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.748634 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.748646 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.748662 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.748696 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:54Z","lastTransitionTime":"2026-01-22T09:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.850745 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.850779 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.850792 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.850809 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.850820 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:54Z","lastTransitionTime":"2026-01-22T09:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.953133 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.953172 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.953180 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.953194 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:54 crc kubenswrapper[4892]: I0122 09:10:54.953203 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:54Z","lastTransitionTime":"2026-01-22T09:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.055367 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.055439 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.055455 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.055476 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.055488 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:55Z","lastTransitionTime":"2026-01-22T09:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.132884 4892 csr.go:261] certificate signing request csr-bpdbq is approved, waiting to be issued Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.157576 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.157603 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.157612 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.157624 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.157633 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:55Z","lastTransitionTime":"2026-01-22T09:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.162415 4892 csr.go:257] certificate signing request csr-bpdbq is issued Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.259889 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.259929 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.259938 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.259951 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.259960 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:55Z","lastTransitionTime":"2026-01-22T09:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.362972 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.363020 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.363033 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.363049 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.363061 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:55Z","lastTransitionTime":"2026-01-22T09:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.376353 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 08:37:32.423858356 +0000 UTC Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.466633 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.466687 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.466697 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.466718 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.466730 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:55Z","lastTransitionTime":"2026-01-22T09:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.569089 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.569142 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.569156 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.569177 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.569194 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:55Z","lastTransitionTime":"2026-01-22T09:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.615677 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-hz9vn"] Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.616073 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-m8b6t"] Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.616310 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.616320 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-m8b6t" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.618385 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.618462 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.618491 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.618559 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.618592 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.618911 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.619002 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.619218 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.637439 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac
2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:55Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.653054 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:55Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.665086 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:55Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.671612 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.671645 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.671663 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.671687 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.671699 4892 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:55Z","lastTransitionTime":"2026-01-22T09:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.676864 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:55Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.690780 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:55Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.705842 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:55Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.722311 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:55Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.728217 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-host-var-lib-kubelet\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.728272 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-multus-conf-dir\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.728326 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-os-release\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.728351 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-etc-kubernetes\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.728374 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-hostroot\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.728413 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-system-cni-dir\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.728437 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/78b0eb1d-db89-4f40-8f34-b35abed54117-hosts-file\") pod \"node-resolver-m8b6t\" (UID: 
\"78b0eb1d-db89-4f40-8f34-b35abed54117\") " pod="openshift-dns/node-resolver-m8b6t" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.728461 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-multus-socket-dir-parent\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.728490 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/80ef00cc-97bb-4f08-ba72-3947ab29043f-multus-daemon-config\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.728515 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcc9c\" (UniqueName: \"kubernetes.io/projected/78b0eb1d-db89-4f40-8f34-b35abed54117-kube-api-access-dcc9c\") pod \"node-resolver-m8b6t\" (UID: \"78b0eb1d-db89-4f40-8f34-b35abed54117\") " pod="openshift-dns/node-resolver-m8b6t" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.728537 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-host-var-lib-cni-multus\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.728559 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9d9g\" (UniqueName: \"kubernetes.io/projected/80ef00cc-97bb-4f08-ba72-3947ab29043f-kube-api-access-f9d9g\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.728586 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-host-run-netns\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.728613 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-host-var-lib-cni-bin\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.728657 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-multus-cni-dir\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.728676 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-cnibin\") pod \"multus-hz9vn\" (UID: 
\"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.728695 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/80ef00cc-97bb-4f08-ba72-3947ab29043f-cni-binary-copy\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.728726 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-host-run-k8s-cni-cncf-io\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.728817 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-host-run-multus-certs\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.735176 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:55Z is after 
2025-08-24T17:21:41Z" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.749730 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:55Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.769328 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:55Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.774605 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.774656 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.774666 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.774687 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.774708 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:55Z","lastTransitionTime":"2026-01-22T09:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.809322 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:55Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.829619 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:55Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.829841 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-system-cni-dir\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.829899 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-multus-socket-dir-parent\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.829918 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/80ef00cc-97bb-4f08-ba72-3947ab29043f-multus-daemon-config\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.829940 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/78b0eb1d-db89-4f40-8f34-b35abed54117-hosts-file\") pod \"node-resolver-m8b6t\" (UID: \"78b0eb1d-db89-4f40-8f34-b35abed54117\") " 
pod="openshift-dns/node-resolver-m8b6t" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.829960 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9d9g\" (UniqueName: \"kubernetes.io/projected/80ef00cc-97bb-4f08-ba72-3947ab29043f-kube-api-access-f9d9g\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.829977 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcc9c\" (UniqueName: \"kubernetes.io/projected/78b0eb1d-db89-4f40-8f34-b35abed54117-kube-api-access-dcc9c\") pod \"node-resolver-m8b6t\" (UID: \"78b0eb1d-db89-4f40-8f34-b35abed54117\") " pod="openshift-dns/node-resolver-m8b6t" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.829994 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-host-var-lib-cni-multus\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.830005 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-system-cni-dir\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.830012 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-host-run-netns\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.830066 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-host-run-netns\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.830072 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-host-var-lib-cni-bin\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.830092 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-cnibin\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.830112 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/80ef00cc-97bb-4f08-ba72-3947ab29043f-cni-binary-copy\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.830129 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-host-run-k8s-cni-cncf-io\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.830137 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-multus-socket-dir-parent\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.830184 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-multus-cni-dir\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.830204 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-host-run-multus-certs\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.830236 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-host-var-lib-kubelet\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.830255 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-multus-conf-dir\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.830277 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-os-release\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.830336 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-hostroot\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.830356 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-etc-kubernetes\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.830447 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/78b0eb1d-db89-4f40-8f34-b35abed54117-hosts-file\") pod \"node-resolver-m8b6t\" (UID: \"78b0eb1d-db89-4f40-8f34-b35abed54117\") " pod="openshift-dns/node-resolver-m8b6t" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.830790 
4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-host-var-lib-cni-multus\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.830966 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-multus-cni-dir\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.830975 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/80ef00cc-97bb-4f08-ba72-3947ab29043f-multus-daemon-config\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.830995 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-host-var-lib-cni-bin\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.831036 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-cnibin\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.831049 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-multus-conf-dir\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.831117 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-host-run-multus-certs\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.831156 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-host-var-lib-kubelet\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.831200 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-host-run-k8s-cni-cncf-io\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.831273 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-os-release\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " 
pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.831331 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-hostroot\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.831448 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/80ef00cc-97bb-4f08-ba72-3947ab29043f-etc-kubernetes\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.831582 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/80ef00cc-97bb-4f08-ba72-3947ab29043f-cni-binary-copy\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.863973 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9d9g\" (UniqueName: \"kubernetes.io/projected/80ef00cc-97bb-4f08-ba72-3947ab29043f-kube-api-access-f9d9g\") pod \"multus-hz9vn\" (UID: \"80ef00cc-97bb-4f08-ba72-3947ab29043f\") " pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.868691 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:55Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.872133 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcc9c\" (UniqueName: \"kubernetes.io/projected/78b0eb1d-db89-4f40-8f34-b35abed54117-kube-api-access-dcc9c\") pod \"node-resolver-m8b6t\" (UID: \"78b0eb1d-db89-4f40-8f34-b35abed54117\") " pod="openshift-dns/node-resolver-m8b6t" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.877869 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.877921 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.877933 4892 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.877951 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.877967 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:55Z","lastTransitionTime":"2026-01-22T09:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.898218 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\
\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:55Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.915055 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:55Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.941856 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-hz9vn" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.943924 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:55Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.947715 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-m8b6t" Jan 22 09:10:55 crc kubenswrapper[4892]: W0122 09:10:55.960936 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78b0eb1d_db89_4f40_8f34_b35abed54117.slice/crio-491b1bb443157ea204c119d75dccc74db8a44fab88430bb800bc951c9be8ba80 WatchSource:0}: Error finding container 491b1bb443157ea204c119d75dccc74db8a44fab88430bb800bc951c9be8ba80: Status 404 returned error can't find the container with id 491b1bb443157ea204c119d75dccc74db8a44fab88430bb800bc951c9be8ba80 Jan 22 09:10:55 crc kubenswrapper[4892]: W0122 09:10:55.971874 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80ef00cc_97bb_4f08_ba72_3947ab29043f.slice/crio-738e460be55a3af5c3a9ccaac66c37b1a5a0321d4daea4c1858588ddb5780773 WatchSource:0}: Error finding container 738e460be55a3af5c3a9ccaac66c37b1a5a0321d4daea4c1858588ddb5780773: Status 404 returned error can't find the container with id 738e460be55a3af5c3a9ccaac66c37b1a5a0321d4daea4c1858588ddb5780773 Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.978208 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:55Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.980296 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.980352 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.980363 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.980378 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:55 crc kubenswrapper[4892]: I0122 09:10:55.980391 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:55Z","lastTransitionTime":"2026-01-22T09:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.005702 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.012869 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-w87tf"] Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.013434 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.017085 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-whb2h"] Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.017868 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-7rbdp"] Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.018117 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.018598 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.026269 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.026297 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.026397 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.026415 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.026466 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.026651 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.026659 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.026673 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.026690 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.026699 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.026786 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.026847 4892 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.026848 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.026944 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.036857 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.049503 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.061309 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.082183 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.083528 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.083550 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 
09:10:56.083559 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.083573 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.083586 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:56Z","lastTransitionTime":"2026-01-22T09:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.095785 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.112115 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.124920 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.136601 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-etc-openvswitch\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.136637 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-run-ovn-kubernetes\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.136658 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/afe12181-a266-4b88-b591-e1c130d15254-os-release\") pod \"multus-additional-cni-plugins-7rbdp\" (UID: \"afe12181-a266-4b88-b591-e1c130d15254\") " pod="openshift-multus/multus-additional-cni-plugins-7rbdp" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.136673 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-run-systemd\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.136692 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8xcw\" (UniqueName: \"kubernetes.io/projected/4765e554-3060-4876-90fe-5e054619d7a1-kube-api-access-k8xcw\") pod \"machine-config-daemon-w87tf\" (UID: \"4765e554-3060-4876-90fe-5e054619d7a1\") " pod="openshift-machine-config-operator/machine-config-daemon-w87tf" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.136709 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/afe12181-a266-4b88-b591-e1c130d15254-cni-binary-copy\") pod \"multus-additional-cni-plugins-7rbdp\" (UID: \"afe12181-a266-4b88-b591-e1c130d15254\") " pod="openshift-multus/multus-additional-cni-plugins-7rbdp" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.136727 4892 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-run-netns\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.136700 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.136742 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-node-log\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.136891 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/afe12181-a266-4b88-b591-e1c130d15254-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7rbdp\" (UID: \"afe12181-a266-4b88-b591-e1c130d15254\") " pod="openshift-multus/multus-additional-cni-plugins-7rbdp" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.136911 4892 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/afe12181-a266-4b88-b591-e1c130d15254-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7rbdp\" (UID: \"afe12181-a266-4b88-b591-e1c130d15254\") " pod="openshift-multus/multus-additional-cni-plugins-7rbdp" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.136927 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-systemd-units\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.136951 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-run-ovn\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.136968 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-cni-bin\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.136988 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4765e554-3060-4876-90fe-5e054619d7a1-proxy-tls\") pod \"machine-config-daemon-w87tf\" (UID: \"4765e554-3060-4876-90fe-5e054619d7a1\") " pod="openshift-machine-config-operator/machine-config-daemon-w87tf" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.137006 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlq68\" (UniqueName: \"kubernetes.io/projected/afe12181-a266-4b88-b591-e1c130d15254-kube-api-access-tlq68\") pod \"multus-additional-cni-plugins-7rbdp\" (UID: \"afe12181-a266-4b88-b591-e1c130d15254\") " pod="openshift-multus/multus-additional-cni-plugins-7rbdp" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.137029 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-kubelet\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.137044 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-run-openvswitch\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.137061 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a93623e9-3eab-47bb-b94a-5b962f3eb203-ovnkube-config\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.137086 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4765e554-3060-4876-90fe-5e054619d7a1-rootfs\") pod \"machine-config-daemon-w87tf\" (UID: \"4765e554-3060-4876-90fe-5e054619d7a1\") " pod="openshift-machine-config-operator/machine-config-daemon-w87tf" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.137107 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/afe12181-a266-4b88-b591-e1c130d15254-cnibin\") pod \"multus-additional-cni-plugins-7rbdp\" (UID: \"afe12181-a266-4b88-b591-e1c130d15254\") " pod="openshift-multus/multus-additional-cni-plugins-7rbdp" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.137160 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-log-socket\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.137196 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a93623e9-3eab-47bb-b94a-5b962f3eb203-env-overrides\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.137216 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a93623e9-3eab-47bb-b94a-5b962f3eb203-ovnkube-script-lib\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.137239 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvw6q\" (UniqueName: \"kubernetes.io/projected/a93623e9-3eab-47bb-b94a-5b962f3eb203-kube-api-access-cvw6q\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.137265 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/afe12181-a266-4b88-b591-e1c130d15254-system-cni-dir\") pod \"multus-additional-cni-plugins-7rbdp\" (UID: \"afe12181-a266-4b88-b591-e1c130d15254\") " pod="openshift-multus/multus-additional-cni-plugins-7rbdp" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.137303 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-cni-netd\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.137325 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/4765e554-3060-4876-90fe-5e054619d7a1-mcd-auth-proxy-config\") pod \"machine-config-daemon-w87tf\" (UID: \"4765e554-3060-4876-90fe-5e054619d7a1\") " pod="openshift-machine-config-operator/machine-config-daemon-w87tf" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.137342 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.137373 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-slash\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.137396 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a93623e9-3eab-47bb-b94a-5b962f3eb203-ovn-node-metrics-cert\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.137417 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-var-lib-openvswitch\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.146260 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.158052 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.163518 4892 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-22 09:05:55 +0000 UTC, rotation deadline is 2026-11-12 18:41:59.286687044 +0000 UTC Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.163548 4892 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7065h31m3.123141089s for next certificate rotation Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.170348 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.181049 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.189875 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.189918 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.189928 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.189944 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.189953 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:56Z","lastTransitionTime":"2026-01-22T09:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.196043 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.210949 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.234517 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.238783 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-var-lib-openvswitch\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.238824 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-etc-openvswitch\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.238839 4892 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-run-ovn-kubernetes\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.238856 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/afe12181-a266-4b88-b591-e1c130d15254-os-release\") pod \"multus-additional-cni-plugins-7rbdp\" (UID: \"afe12181-a266-4b88-b591-e1c130d15254\") " pod="openshift-multus/multus-additional-cni-plugins-7rbdp" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.238871 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-run-systemd\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.238887 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8xcw\" (UniqueName: \"kubernetes.io/projected/4765e554-3060-4876-90fe-5e054619d7a1-kube-api-access-k8xcw\") pod \"machine-config-daemon-w87tf\" (UID: \"4765e554-3060-4876-90fe-5e054619d7a1\") " pod="openshift-machine-config-operator/machine-config-daemon-w87tf" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.238907 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/afe12181-a266-4b88-b591-e1c130d15254-cni-binary-copy\") pod \"multus-additional-cni-plugins-7rbdp\" (UID: \"afe12181-a266-4b88-b591-e1c130d15254\") " pod="openshift-multus/multus-additional-cni-plugins-7rbdp" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.238922 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-run-netns\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.238937 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-node-log\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.238953 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/afe12181-a266-4b88-b591-e1c130d15254-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7rbdp\" (UID: \"afe12181-a266-4b88-b591-e1c130d15254\") " pod="openshift-multus/multus-additional-cni-plugins-7rbdp" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.238984 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/afe12181-a266-4b88-b591-e1c130d15254-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7rbdp\" (UID: \"afe12181-a266-4b88-b591-e1c130d15254\") " pod="openshift-multus/multus-additional-cni-plugins-7rbdp" Jan 22 
09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.238998 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-systemd-units\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.239022 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-run-ovn\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.239003 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-etc-openvswitch\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.239057 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-run-systemd\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.239080 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-cni-bin\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.239024 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/afe12181-a266-4b88-b591-e1c130d15254-os-release\") pod \"multus-additional-cni-plugins-7rbdp\" (UID: \"afe12181-a266-4b88-b591-e1c130d15254\") " pod="openshift-multus/multus-additional-cni-plugins-7rbdp" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.239059 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-node-log\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.239104 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-systemd-units\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.239114 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-run-netns\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.239043 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-cni-bin\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.239114 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/afe12181-a266-4b88-b591-e1c130d15254-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7rbdp\" (UID: \"afe12181-a266-4b88-b591-e1c130d15254\") " pod="openshift-multus/multus-additional-cni-plugins-7rbdp" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.239164 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-run-ovn\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.239017 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-run-ovn-kubernetes\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.239207 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4765e554-3060-4876-90fe-5e054619d7a1-proxy-tls\") pod \"machine-config-daemon-w87tf\" (UID: \"4765e554-3060-4876-90fe-5e054619d7a1\") " pod="openshift-machine-config-operator/machine-config-daemon-w87tf" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.239213 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-var-lib-openvswitch\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.239306 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlq68\" (UniqueName: \"kubernetes.io/projected/afe12181-a266-4b88-b591-e1c130d15254-kube-api-access-tlq68\") pod \"multus-additional-cni-plugins-7rbdp\" (UID: \"afe12181-a266-4b88-b591-e1c130d15254\") " pod="openshift-multus/multus-additional-cni-plugins-7rbdp" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.239444 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-kubelet\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.239494 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-run-openvswitch\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.239518 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/a93623e9-3eab-47bb-b94a-5b962f3eb203-ovnkube-config\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.239540 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-kubelet\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.239566 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-run-openvswitch\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.239674 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4765e554-3060-4876-90fe-5e054619d7a1-rootfs\") pod \"machine-config-daemon-w87tf\" (UID: \"4765e554-3060-4876-90fe-5e054619d7a1\") " pod="openshift-machine-config-operator/machine-config-daemon-w87tf" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.239707 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4765e554-3060-4876-90fe-5e054619d7a1-rootfs\") pod \"machine-config-daemon-w87tf\" (UID: \"4765e554-3060-4876-90fe-5e054619d7a1\") " pod="openshift-machine-config-operator/machine-config-daemon-w87tf" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.239730 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/afe12181-a266-4b88-b591-e1c130d15254-cnibin\") pod \"multus-additional-cni-plugins-7rbdp\" (UID: \"afe12181-a266-4b88-b591-e1c130d15254\") " pod="openshift-multus/multus-additional-cni-plugins-7rbdp" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.239749 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-log-socket\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.239772 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a93623e9-3eab-47bb-b94a-5b962f3eb203-env-overrides\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.239776 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/afe12181-a266-4b88-b591-e1c130d15254-cni-binary-copy\") pod \"multus-additional-cni-plugins-7rbdp\" (UID: \"afe12181-a266-4b88-b591-e1c130d15254\") " pod="openshift-multus/multus-additional-cni-plugins-7rbdp" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.239798 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/a93623e9-3eab-47bb-b94a-5b962f3eb203-ovnkube-script-lib\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.239797 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/afe12181-a266-4b88-b591-e1c130d15254-cnibin\") pod \"multus-additional-cni-plugins-7rbdp\" (UID: \"afe12181-a266-4b88-b591-e1c130d15254\") " pod="openshift-multus/multus-additional-cni-plugins-7rbdp" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.239833 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-log-socket\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.239859 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/afe12181-a266-4b88-b591-e1c130d15254-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7rbdp\" (UID: \"afe12181-a266-4b88-b591-e1c130d15254\") " pod="openshift-multus/multus-additional-cni-plugins-7rbdp" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.239931 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvw6q\" (UniqueName: \"kubernetes.io/projected/a93623e9-3eab-47bb-b94a-5b962f3eb203-kube-api-access-cvw6q\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.240016 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/afe12181-a266-4b88-b591-e1c130d15254-system-cni-dir\") pod \"multus-additional-cni-plugins-7rbdp\" (UID: \"afe12181-a266-4b88-b591-e1c130d15254\") " pod="openshift-multus/multus-additional-cni-plugins-7rbdp" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.240035 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-cni-netd\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.240053 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4765e554-3060-4876-90fe-5e054619d7a1-mcd-auth-proxy-config\") pod \"machine-config-daemon-w87tf\" (UID: \"4765e554-3060-4876-90fe-5e054619d7a1\") " pod="openshift-machine-config-operator/machine-config-daemon-w87tf" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.240069 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.240087 4892 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-slash\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.240105 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a93623e9-3eab-47bb-b94a-5b962f3eb203-ovn-node-metrics-cert\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.240239 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a93623e9-3eab-47bb-b94a-5b962f3eb203-env-overrides\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.240084 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/afe12181-a266-4b88-b591-e1c130d15254-system-cni-dir\") pod \"multus-additional-cni-plugins-7rbdp\" (UID: \"afe12181-a266-4b88-b591-e1c130d15254\") " pod="openshift-multus/multus-additional-cni-plugins-7rbdp" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.240262 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-slash\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.240315 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-cni-netd\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.240322 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.240395 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a93623e9-3eab-47bb-b94a-5b962f3eb203-ovnkube-config\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.240665 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a93623e9-3eab-47bb-b94a-5b962f3eb203-ovnkube-script-lib\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.240736 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/4765e554-3060-4876-90fe-5e054619d7a1-mcd-auth-proxy-config\") pod \"machine-config-daemon-w87tf\" (UID: \"4765e554-3060-4876-90fe-5e054619d7a1\") " pod="openshift-machine-config-operator/machine-config-daemon-w87tf" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.244327 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4765e554-3060-4876-90fe-5e054619d7a1-proxy-tls\") pod \"machine-config-daemon-w87tf\" (UID: \"4765e554-3060-4876-90fe-5e054619d7a1\") " pod="openshift-machine-config-operator/machine-config-daemon-w87tf" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.246929 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a93623e9-3eab-47bb-b94a-5b962f3eb203-ovn-node-metrics-cert\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.258834 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlq68\" (UniqueName: \"kubernetes.io/projected/afe12181-a266-4b88-b591-e1c130d15254-kube-api-access-tlq68\") pod \"multus-additional-cni-plugins-7rbdp\" (UID: \"afe12181-a266-4b88-b591-e1c130d15254\") " pod="openshift-multus/multus-additional-cni-plugins-7rbdp" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.258934 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvw6q\" (UniqueName: \"kubernetes.io/projected/a93623e9-3eab-47bb-b94a-5b962f3eb203-kube-api-access-cvw6q\") pod \"ovnkube-node-whb2h\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.262681 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8xcw\" (UniqueName: \"kubernetes.io/projected/4765e554-3060-4876-90fe-5e054619d7a1-kube-api-access-k8xcw\") pod \"machine-config-daemon-w87tf\" (UID: \"4765e554-3060-4876-90fe-5e054619d7a1\") " pod="openshift-machine-config-operator/machine-config-daemon-w87tf" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.291951 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.291988 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.291997 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.292010 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.292022 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:56Z","lastTransitionTime":"2026-01-22T09:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.329927 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.336439 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:10:56 crc kubenswrapper[4892]: W0122 09:10:56.347270 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda93623e9_3eab_47bb_b94a_5b962f3eb203.slice/crio-5cb0868bf5953d4b5ac00c2b59132114cba630d48d879e15bcd646c0821dd213 WatchSource:0}: Error finding container 5cb0868bf5953d4b5ac00c2b59132114cba630d48d879e15bcd646c0821dd213: Status 404 returned error can't find the container with id 5cb0868bf5953d4b5ac00c2b59132114cba630d48d879e15bcd646c0821dd213 Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.350108 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" Jan 22 09:10:56 crc kubenswrapper[4892]: W0122 09:10:56.367890 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafe12181_a266_4b88_b591_e1c130d15254.slice/crio-46cfc3dc5975de8c39506ca1b2070bfde0fed59eaee3a12191d496f2cd5b2ca8 WatchSource:0}: Error finding container 46cfc3dc5975de8c39506ca1b2070bfde0fed59eaee3a12191d496f2cd5b2ca8: Status 404 returned error can't find the container with id 46cfc3dc5975de8c39506ca1b2070bfde0fed59eaee3a12191d496f2cd5b2ca8 Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.376503 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 20:35:52.596363136 +0000 UTC Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.395152 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.395746 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.395846 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.395953 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.396041 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:56Z","lastTransitionTime":"2026-01-22T09:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.417850 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.417893 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:10:56 crc kubenswrapper[4892]: E0122 09:10:56.418331 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:10:56 crc kubenswrapper[4892]: E0122 09:10:56.418117 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.417927 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:10:56 crc kubenswrapper[4892]: E0122 09:10:56.418460 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.499455 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.499489 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.499497 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.499512 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.499522 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:56Z","lastTransitionTime":"2026-01-22T09:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.577846 4892 generic.go:334] "Generic (PLEG): container finished" podID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerID="a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1" exitCode=0 Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.577959 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" event={"ID":"a93623e9-3eab-47bb-b94a-5b962f3eb203","Type":"ContainerDied","Data":"a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1"} Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.578043 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" event={"ID":"a93623e9-3eab-47bb-b94a-5b962f3eb203","Type":"ContainerStarted","Data":"5cb0868bf5953d4b5ac00c2b59132114cba630d48d879e15bcd646c0821dd213"} Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.580883 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-m8b6t" event={"ID":"78b0eb1d-db89-4f40-8f34-b35abed54117","Type":"ContainerStarted","Data":"3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450"} Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.580939 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-m8b6t" event={"ID":"78b0eb1d-db89-4f40-8f34-b35abed54117","Type":"ContainerStarted","Data":"491b1bb443157ea204c119d75dccc74db8a44fab88430bb800bc951c9be8ba80"} Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.584098 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" event={"ID":"afe12181-a266-4b88-b591-e1c130d15254","Type":"ContainerStarted","Data":"46cfc3dc5975de8c39506ca1b2070bfde0fed59eaee3a12191d496f2cd5b2ca8"} Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.586377 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerStarted","Data":"f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a"} Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.586409 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerStarted","Data":"4bf07651957ac3350f1ba7d9a4478e7e57b08b82cb8766dea813f6acccfc25b7"} Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.589296 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hz9vn" event={"ID":"80ef00cc-97bb-4f08-ba72-3947ab29043f","Type":"ContainerStarted","Data":"e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b"} Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.589345 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hz9vn" event={"ID":"80ef00cc-97bb-4f08-ba72-3947ab29043f","Type":"ContainerStarted","Data":"738e460be55a3af5c3a9ccaac66c37b1a5a0321d4daea4c1858588ddb5780773"} Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.594281 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.604357 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.604404 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.604413 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.604431 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.604441 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:56Z","lastTransitionTime":"2026-01-22T09:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.609550 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.627647 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.643100 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.654368 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.702687 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.708663 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.708707 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.708721 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.708740 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.708755 4892 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:56Z","lastTransitionTime":"2026-01-22T09:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.729721 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.757711 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.784184 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3
e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.800371 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.811486 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.811526 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.811536 4892 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.811553 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.811564 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:56Z","lastTransitionTime":"2026-01-22T09:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.816240 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae3
9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.832404 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.845067 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.858890 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.873069 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.890046 4892 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.908459 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.914698 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.914736 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.914751 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.914771 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.914787 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:56Z","lastTransitionTime":"2026-01-22T09:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.921949 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.935853 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.948195 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.961590 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.975934 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.987593 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:56 crc kubenswrapper[4892]: I0122 09:10:56.999466 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.016954 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.017011 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.017025 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.017042 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.017055 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:57Z","lastTransitionTime":"2026-01-22T09:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.017571 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:57Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.028522 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:57Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.119643 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.119668 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.119675 4892 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.119689 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.119697 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:57Z","lastTransitionTime":"2026-01-22T09:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.221958 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.222011 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.222024 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.222044 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.222058 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:57Z","lastTransitionTime":"2026-01-22T09:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.324382 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.324429 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.324443 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.324462 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.324475 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:57Z","lastTransitionTime":"2026-01-22T09:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.377014 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 04:09:36.305896531 +0000 UTC Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.426985 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.427018 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.427026 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.427039 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.427052 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:57Z","lastTransitionTime":"2026-01-22T09:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.502547 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-gqbrf"] Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.502838 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-gqbrf" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.506038 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.506531 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.506701 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.506845 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.523965 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:57Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.529069 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.529100 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.529109 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.529125 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.529134 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:57Z","lastTransitionTime":"2026-01-22T09:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.537170 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:57Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.546695 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:57Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.552322 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f2782a4-367a-4690-911a-06ca51331fe6-host\") pod \"node-ca-gqbrf\" (UID: \"0f2782a4-367a-4690-911a-06ca51331fe6\") " pod="openshift-image-registry/node-ca-gqbrf" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 
09:10:57.552359 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0f2782a4-367a-4690-911a-06ca51331fe6-serviceca\") pod \"node-ca-gqbrf\" (UID: \"0f2782a4-367a-4690-911a-06ca51331fe6\") " pod="openshift-image-registry/node-ca-gqbrf" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.552378 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhzbl\" (UniqueName: \"kubernetes.io/projected/0f2782a4-367a-4690-911a-06ca51331fe6-kube-api-access-qhzbl\") pod \"node-ca-gqbrf\" (UID: \"0f2782a4-367a-4690-911a-06ca51331fe6\") " pod="openshift-image-registry/node-ca-gqbrf" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.558100 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:57Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.572855 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:57Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.587632 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:57Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.599032 4892 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" event={"ID":"a93623e9-3eab-47bb-b94a-5b962f3eb203","Type":"ContainerStarted","Data":"5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855"} Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.599096 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" event={"ID":"a93623e9-3eab-47bb-b94a-5b962f3eb203","Type":"ContainerStarted","Data":"e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be"} Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.599113 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" event={"ID":"a93623e9-3eab-47bb-b94a-5b962f3eb203","Type":"ContainerStarted","Data":"dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0"} Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.599127 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" event={"ID":"a93623e9-3eab-47bb-b94a-5b962f3eb203","Type":"ContainerStarted","Data":"168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9"} Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.599141 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" event={"ID":"a93623e9-3eab-47bb-b94a-5b962f3eb203","Type":"ContainerStarted","Data":"dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237"} Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.600605 4892 generic.go:334] "Generic (PLEG): container finished" podID="afe12181-a266-4b88-b591-e1c130d15254" containerID="57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf" exitCode=0 Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.600650 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" event={"ID":"afe12181-a266-4b88-b591-e1c130d15254","Type":"ContainerDied","Data":"57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf"} Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.601047 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gqbrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2782a4-367a-4690-911a-06ca51331fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhzbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gqbrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:57Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.603043 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerStarted","Data":"d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9"} Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.617043 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:57Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.629610 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa9308
9f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:57Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.631056 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.631087 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.631097 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.631111 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.631121 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:57Z","lastTransitionTime":"2026-01-22T09:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.645478 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1
74f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:57Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.653415 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f2782a4-367a-4690-911a-06ca51331fe6-host\") pod \"node-ca-gqbrf\" (UID: \"0f2782a4-367a-4690-911a-06ca51331fe6\") " pod="openshift-image-registry/node-ca-gqbrf" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.653465 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0f2782a4-367a-4690-911a-06ca51331fe6-serviceca\") pod \"node-ca-gqbrf\" (UID: \"0f2782a4-367a-4690-911a-06ca51331fe6\") " pod="openshift-image-registry/node-ca-gqbrf" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.653485 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhzbl\" (UniqueName: \"kubernetes.io/projected/0f2782a4-367a-4690-911a-06ca51331fe6-kube-api-access-qhzbl\") pod \"node-ca-gqbrf\" (UID: \"0f2782a4-367a-4690-911a-06ca51331fe6\") " pod="openshift-image-registry/node-ca-gqbrf" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.654028 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f2782a4-367a-4690-911a-06ca51331fe6-host\") pod \"node-ca-gqbrf\" (UID: \"0f2782a4-367a-4690-911a-06ca51331fe6\") " pod="openshift-image-registry/node-ca-gqbrf" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.654912 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0f2782a4-367a-4690-911a-06ca51331fe6-serviceca\") pod \"node-ca-gqbrf\" (UID: \"0f2782a4-367a-4690-911a-06ca51331fe6\") " pod="openshift-image-registry/node-ca-gqbrf" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.665003 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:57Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.672942 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhzbl\" (UniqueName: \"kubernetes.io/projected/0f2782a4-367a-4690-911a-06ca51331fe6-kube-api-access-qhzbl\") pod \"node-ca-gqbrf\" (UID: \"0f2782a4-367a-4690-911a-06ca51331fe6\") " pod="openshift-image-registry/node-ca-gqbrf" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.678214 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:57Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.690416 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:57Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.703987 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:57Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.716801 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:57Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.726693 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gqbrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2782a4-367a-4690-911a-06ca51331fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhzbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gqbrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:57Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.735843 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.735896 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.735908 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.735926 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.735939 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:57Z","lastTransitionTime":"2026-01-22T09:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.740429 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:57Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.755703 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"
tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:57Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.767560 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:57Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.785022 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:57Z 
is after 2025-08-24T17:21:41Z" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.797772 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:57Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.810559 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:57Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.821256 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:57Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.823098 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-gqbrf" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.832055 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:57Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.843442 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:57Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.848795 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.848830 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.848840 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.848855 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.848866 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:57Z","lastTransitionTime":"2026-01-22T09:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.851869 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:57Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:57 crc kubenswrapper[4892]: W0122 09:10:57.859123 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f2782a4_367a_4690_911a_06ca51331fe6.slice/crio-209f98de46060cd5a7eb9eec66c9dfbcceb273409bcfce90f9d066b4ade414d3 WatchSource:0}: Error finding container 209f98de46060cd5a7eb9eec66c9dfbcceb273409bcfce90f9d066b4ade414d3: Status 404 returned error can't find the container with id 209f98de46060cd5a7eb9eec66c9dfbcceb273409bcfce90f9d066b4ade414d3 Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.864592 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:57Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.877392 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:57Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.952411 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.952453 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.952465 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.952481 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:57 crc kubenswrapper[4892]: I0122 09:10:57.952492 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:57Z","lastTransitionTime":"2026-01-22T09:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.058071 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.058303 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.058337 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.058348 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.058367 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.058377 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:58Z","lastTransitionTime":"2026-01-22T09:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:58 crc kubenswrapper[4892]: E0122 09:10:58.058409 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:11:06.058359003 +0000 UTC m=+35.902438066 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:10:58 crc kubenswrapper[4892]: E0122 09:10:58.058728 4892 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:10:58 crc kubenswrapper[4892]: E0122 09:10:58.058757 4892 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:10:58 crc kubenswrapper[4892]: E0122 09:10:58.058795 4892 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.058779 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:10:58 crc kubenswrapper[4892]: E0122 09:10:58.059333 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 09:11:06.058831665 +0000 UTC m=+35.902910728 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.059407 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.059435 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:10:58 crc kubenswrapper[4892]: E0122 09:10:58.059552 4892 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:10:58 crc kubenswrapper[4892]: E0122 09:10:58.059567 4892 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:10:58 crc kubenswrapper[4892]: E0122 09:10:58.059576 4892 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:10:58 crc kubenswrapper[4892]: E0122 09:10:58.059610 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 09:11:06.059597746 +0000 UTC m=+35.903676809 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:10:58 crc kubenswrapper[4892]: E0122 09:10:58.059967 4892 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:10:58 crc kubenswrapper[4892]: E0122 09:10:58.060018 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-22 09:11:06.060007716 +0000 UTC m=+35.904086779 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.160050 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:10:58 crc kubenswrapper[4892]: E0122 09:10:58.160142 4892 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 09:10:58 crc kubenswrapper[4892]: E0122 09:10:58.160189 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 09:11:06.160176975 +0000 UTC m=+36.004256038 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.160713 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.160756 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.160768 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.160785 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.160798 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:58Z","lastTransitionTime":"2026-01-22T09:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.262722 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.262763 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.262773 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.262788 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.262800 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:58Z","lastTransitionTime":"2026-01-22T09:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.365589 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.365626 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.365637 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.365654 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.365666 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:58Z","lastTransitionTime":"2026-01-22T09:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.377938 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 05:02:46.395643458 +0000 UTC Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.418446 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.418486 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.418494 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:10:58 crc kubenswrapper[4892]: E0122 09:10:58.418603 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:10:58 crc kubenswrapper[4892]: E0122 09:10:58.418704 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:10:58 crc kubenswrapper[4892]: E0122 09:10:58.418776 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.467870 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.467913 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.467925 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.467939 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.467951 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:58Z","lastTransitionTime":"2026-01-22T09:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.569954 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.569999 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.570012 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.570030 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.570042 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:58Z","lastTransitionTime":"2026-01-22T09:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.607829 4892 generic.go:334] "Generic (PLEG): container finished" podID="afe12181-a266-4b88-b591-e1c130d15254" containerID="be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73" exitCode=0 Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.607915 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" event={"ID":"afe12181-a266-4b88-b591-e1c130d15254","Type":"ContainerDied","Data":"be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73"} Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.612724 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" event={"ID":"a93623e9-3eab-47bb-b94a-5b962f3eb203","Type":"ContainerStarted","Data":"be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c"} Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.614031 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-gqbrf" event={"ID":"0f2782a4-367a-4690-911a-06ca51331fe6","Type":"ContainerStarted","Data":"79e069b9560e67a7f6e2188bfbfdd2180c36c0b0956f4a1d7c7f5803abcd3603"} Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.614070 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-gqbrf" event={"ID":"0f2782a4-367a-4690-911a-06ca51331fe6","Type":"ContainerStarted","Data":"209f98de46060cd5a7eb9eec66c9dfbcceb273409bcfce90f9d066b4ade414d3"} Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.619329 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:58Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.636406 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:58Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.648046 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:58Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.659382 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:58Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.673242 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:58Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.674338 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.674370 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.674381 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.674397 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.674409 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:58Z","lastTransitionTime":"2026-01-22T09:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.684219 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:58Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.692892 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gqbrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2782a4-367a-4690-911a-06ca51331fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhzbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gqbrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:58Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.707025 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:58Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.720779 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:58Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.767270 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:58Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.781174 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.781221 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.781247 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.781266 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.781278 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:58Z","lastTransitionTime":"2026-01-22T09:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.789354 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\
\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:58Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.801053 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:58Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.819665 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:58Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.830535 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:58Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.841849 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:58Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.850821 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:58Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.863823 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:58Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.874358 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:58Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.883621 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.883679 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.883693 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.883712 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.883724 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:58Z","lastTransitionTime":"2026-01-22T09:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.886810 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:58Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.895531 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:58Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.905836 4892 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-gqbrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2782a4-367a-4690-911a-06ca51331fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e069b9560e67a7f6e2188bfbfdd2180c36c0b0956f4a1d7c7f5803abcd3603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhzbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gqbrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:58Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.916582 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:58Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.931999 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:58Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.947563 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:58Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.959342 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:58Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.972871 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf
5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:58Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.985461 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:58Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.986234 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.986265 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.986277 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.986315 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.986328 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:58Z","lastTransitionTime":"2026-01-22T09:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:10:58 crc kubenswrapper[4892]: I0122 09:10:58.996072 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:58Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.088641 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.088687 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.088699 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.088716 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.088728 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:59Z","lastTransitionTime":"2026-01-22T09:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.190947 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.190988 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.191000 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.191016 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.191027 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:59Z","lastTransitionTime":"2026-01-22T09:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.293334 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.293362 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.293370 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.293382 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.293390 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:59Z","lastTransitionTime":"2026-01-22T09:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.378731 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 02:00:25.303924899 +0000 UTC Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.395644 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.395685 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.395694 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.395706 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.395716 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:59Z","lastTransitionTime":"2026-01-22T09:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.502138 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.502188 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.502198 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.502225 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.502239 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:59Z","lastTransitionTime":"2026-01-22T09:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.604362 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.604412 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.604424 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.604440 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.604450 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:59Z","lastTransitionTime":"2026-01-22T09:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.619454 4892 generic.go:334] "Generic (PLEG): container finished" podID="afe12181-a266-4b88-b591-e1c130d15254" containerID="66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603" exitCode=0 Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.619522 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" event={"ID":"afe12181-a266-4b88-b591-e1c130d15254","Type":"ContainerDied","Data":"66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603"} Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.638549 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:59Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.655744 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:59Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.670756 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:59Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.685184 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:59Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.695913 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:59Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.706442 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.706482 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.706495 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.706511 4892 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.706525 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:59Z","lastTransitionTime":"2026-01-22T09:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.712441 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:59Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.726742 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:59Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.741274 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:59Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.755667 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:59Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.768067 4892 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-gqbrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2782a4-367a-4690-911a-06ca51331fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e069b9560e67a7f6e2188bfbfdd2180c36c0b0956f4a1d7c7f5803abcd3603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhzbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gqbrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:59Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.785107 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:59Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.796763 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:59Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.809014 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.809041 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.809049 4892 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.809062 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.809071 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:59Z","lastTransitionTime":"2026-01-22T09:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.819842 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:59Z 
is after 2025-08-24T17:21:41Z" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.834768 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:10:59Z is after 2025-08-24T17:21:41Z" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.910895 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.910932 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.910940 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.910953 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:10:59 crc kubenswrapper[4892]: I0122 09:10:59.910961 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:10:59Z","lastTransitionTime":"2026-01-22T09:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.013216 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.013251 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.013260 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.013273 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.013297 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:00Z","lastTransitionTime":"2026-01-22T09:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.116626 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.116662 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.116673 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.116689 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.116700 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:00Z","lastTransitionTime":"2026-01-22T09:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.219125 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.219163 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.219173 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.219186 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.219195 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:00Z","lastTransitionTime":"2026-01-22T09:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.321874 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.321925 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.321942 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.321963 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.321977 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:00Z","lastTransitionTime":"2026-01-22T09:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.379560 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 13:58:11.976446562 +0000 UTC Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.417997 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:11:00 crc kubenswrapper[4892]: E0122 09:11:00.418097 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.418407 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:11:00 crc kubenswrapper[4892]: E0122 09:11:00.418461 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.418496 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:11:00 crc kubenswrapper[4892]: E0122 09:11:00.418533 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.430742 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.430787 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.430798 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.430814 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.430825 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:00Z","lastTransitionTime":"2026-01-22T09:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.533397 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.533442 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.533455 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.533470 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.533481 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:00Z","lastTransitionTime":"2026-01-22T09:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.542922 4892 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.623871 4892 generic.go:334] "Generic (PLEG): container finished" podID="afe12181-a266-4b88-b591-e1c130d15254" containerID="af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50" exitCode=0 Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.623946 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" event={"ID":"afe12181-a266-4b88-b591-e1c130d15254","Type":"ContainerDied","Data":"af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50"} Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.628193 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" event={"ID":"a93623e9-3eab-47bb-b94a-5b962f3eb203","Type":"ContainerStarted","Data":"86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be"} Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.635695 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.635730 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.635740 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.635754 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.635765 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:00Z","lastTransitionTime":"2026-01-22T09:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.640431 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:00Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.654315 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:00Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.665812 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:00Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.680669 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:00Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.689705 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:00Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.703574 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-
pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:00Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.715042 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:00Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.724224 4892 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-gqbrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2782a4-367a-4690-911a-06ca51331fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e069b9560e67a7f6e2188bfbfdd2180c36c0b0956f4a1d7c7f5803abcd3603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhzbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gqbrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:00Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.738673 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.738786 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.738806 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.738823 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.738833 4892 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:00Z","lastTransitionTime":"2026-01-22T09:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.741036 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:00Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.763255 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secret
s/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatu
ses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:00Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.776576 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:00Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.786877 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259712
6bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:00Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.797300 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:00Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.807392 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:00Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.841728 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.841769 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.841779 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.841793 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.841803 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:00Z","lastTransitionTime":"2026-01-22T09:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.944639 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.944674 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.944684 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.944697 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:00 crc kubenswrapper[4892]: I0122 09:11:00.944706 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:00Z","lastTransitionTime":"2026-01-22T09:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.046710 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.046746 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.046756 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.046772 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.046783 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:01Z","lastTransitionTime":"2026-01-22T09:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.148595 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.148664 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.148676 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.148693 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.148724 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:01Z","lastTransitionTime":"2026-01-22T09:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.251259 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.251323 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.251335 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.251351 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.251362 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:01Z","lastTransitionTime":"2026-01-22T09:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.353566 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.353601 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.353612 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.353629 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.353638 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:01Z","lastTransitionTime":"2026-01-22T09:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.379785 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 10:36:14.401661122 +0000 UTC Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.431234 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}
\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:01Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.442135 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gqbrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2782a4-367a-4690-911a-06ca51331fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e069b9560e67a7f6e2188bfbfdd2180c36c0b0956f4a1d7c7f5803abcd3603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhzbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gqbrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:01Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.455342 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.455441 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.455456 4892 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.455475 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.455488 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:01Z","lastTransitionTime":"2026-01-22T09:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.458609 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:01Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.470467 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:01Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.491625 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:01Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.512188 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:01Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.532575 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259712
6bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:01Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.551813 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:01Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.557190 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.557224 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.557236 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.557251 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.557262 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:01Z","lastTransitionTime":"2026-01-22T09:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.566829 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:01Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.576794 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:01Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.588937 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:01Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.600348 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:01Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.611111 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:01Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.622694 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:01Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.635435 4892 generic.go:334] "Generic (PLEG): container finished" podID="afe12181-a266-4b88-b591-e1c130d15254" containerID="64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47" exitCode=0 Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.635472 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" event={"ID":"afe12181-a266-4b88-b591-e1c130d15254","Type":"ContainerDied","Data":"64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47"} Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.652935 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:01Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.659123 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.659155 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.659164 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.659176 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.659219 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:01Z","lastTransitionTime":"2026-01-22T09:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.667436 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:01Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.681173 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:01Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.697362 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:01Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.711419 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:01Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.726226 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:01Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.744138 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:01Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.754849 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:01Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.761501 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.761535 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.761543 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.761557 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.761566 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:01Z","lastTransitionTime":"2026-01-22T09:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.768999 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:01Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.780500 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:01Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.792323 4892 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-gqbrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2782a4-367a-4690-911a-06ca51331fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e069b9560e67a7f6e2188bfbfdd2180c36c0b0956f4a1d7c7f5803abcd3603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhzbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gqbrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:01Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.805466 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:01Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.821993 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:01Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.834817 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:01Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.863810 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.863857 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.863868 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.863886 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.863900 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:01Z","lastTransitionTime":"2026-01-22T09:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.966262 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.966361 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.966382 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.966406 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:01 crc kubenswrapper[4892]: I0122 09:11:01.966425 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:01Z","lastTransitionTime":"2026-01-22T09:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.069239 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.069276 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.069301 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.069316 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.069327 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:02Z","lastTransitionTime":"2026-01-22T09:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.172757 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.172803 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.172813 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.172834 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.172846 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:02Z","lastTransitionTime":"2026-01-22T09:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.276254 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.276351 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.276387 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.276427 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.276451 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:02Z","lastTransitionTime":"2026-01-22T09:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.378964 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.379007 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.379018 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.379037 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.379052 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:02Z","lastTransitionTime":"2026-01-22T09:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.380129 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 18:59:32.92163691 +0000 UTC Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.417675 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.417691 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:11:02 crc kubenswrapper[4892]: E0122 09:11:02.417773 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.417835 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:11:02 crc kubenswrapper[4892]: E0122 09:11:02.417966 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:11:02 crc kubenswrapper[4892]: E0122 09:11:02.418119 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.481635 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.481677 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.481688 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.481703 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.481715 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:02Z","lastTransitionTime":"2026-01-22T09:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.584774 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.584828 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.584841 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.584861 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.584873 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:02Z","lastTransitionTime":"2026-01-22T09:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.642662 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" event={"ID":"afe12181-a266-4b88-b591-e1c130d15254","Type":"ContainerStarted","Data":"11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8"} Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.647931 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" event={"ID":"a93623e9-3eab-47bb-b94a-5b962f3eb203","Type":"ContainerStarted","Data":"486331c87a00f1a615c742b0b281e00e3babb521c17a58b73f3eb4777e502a47"} Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.648327 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.648357 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.664268 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:02Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.677337 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.681646 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:02Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.682215 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.687398 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.687434 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.687447 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.687462 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.687474 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:02Z","lastTransitionTime":"2026-01-22T09:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.701235 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:02Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.714898 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:02Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.733865 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:02Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.744991 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:02Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.754997 4892 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-gqbrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2782a4-367a-4690-911a-06ca51331fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e069b9560e67a7f6e2188bfbfdd2180c36c0b0956f4a1d7c7f5803abcd3603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhzbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gqbrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:02Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.767046 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:02Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.786890 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:02Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.789520 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.789557 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.789568 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.789588 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.789598 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:02Z","lastTransitionTime":"2026-01-22T09:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.799960 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq
68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:59Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:02Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.808545 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:02 crc 
kubenswrapper[4892]: I0122 09:11:02.808581 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.808593 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.808610 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.808623 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:02Z","lastTransitionTime":"2026-01-22T09:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.816917 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/
env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:02Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:02 crc kubenswrapper[4892]: E0122 09:11:02.822751 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c930485-9734-4304-ad2c-ecfe6f90ae0f\\\",\\\"systemUUID\\\":\\\"61509b40-08df-4430-847e-d3a8d2681f9e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:02Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.825729 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.825787 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.825800 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.825817 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.825828 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:02Z","lastTransitionTime":"2026-01-22T09:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.834636 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:02Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:02 crc kubenswrapper[4892]: E0122 09:11:02.847021 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c930485-9734-4304-ad2c-ecfe6f90ae0f\\\",\\\"systemUUID\\\":\\\"61509b40-08df-4430-847e-d3a8d2681f9e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:02Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.849346 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:02Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.852194 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.852230 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.852239 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.852254 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.852265 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:02Z","lastTransitionTime":"2026-01-22T09:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.867977 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:02Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:02 crc kubenswrapper[4892]: E0122 09:11:02.874095 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c930485-9734-4304-ad2c-ecfe6f90ae0f\\\",\\\"systemUUID\\\":\\\"61509b40-08df-4430-847e-d3a8d2681f9e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:02Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.877571 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.877616 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.877628 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.877646 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.877659 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:02Z","lastTransitionTime":"2026-01-22T09:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.886546 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:02Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:02 crc kubenswrapper[4892]: E0122 09:11:02.892994 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"message\\\":\\\"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79
379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c930485-9734-4304-ad2c-ecfe6f90ae0f\\\",\\\"systemUUID\\\":\\\"61509b40-08df-4430-847e-d3a8d2681f9e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:02Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.896530 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
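
Diagnostic aside (not part of the captured journal): every status patch above fails the same way — the kubelet's POST to the network-node-identity webhook at https://127.0.0.1:9743 is rejected because the webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-22. A minimal sketch, assuming the endpoint is reachable locally on the node exactly as the log reports, that prints the validity window of whatever certificate the endpoint currently serves, confirming the x509 failure independently of the kubelet:

    // Hypothetical diagnostic sketch, not part of the cluster's own tooling.
    // It dials the webhook address taken from the log entries above and
    // reports the served certificate's notBefore/notAfter dates.
    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "time"
    )

    func main() {
        // Skip chain verification on purpose: the point is to read the
        // certificate's dates, not to trust it.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            log.Fatalf("dial webhook endpoint: %v", err)
        }
        defer conn.Close()

        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Printf("subject:   %s\n", cert.Subject)
        fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
        fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
        if time.Now().After(cert.NotAfter) {
            // Matches the failure in the log: current time is after
            // notAfter (2025-08-24T17:21:41Z in the entries above).
            fmt.Println("certificate is expired")
        }
    }

Until that certificate is rotated or regenerated, the log suggests every node and pod status patch will keep failing with this same x509 error, which is why the retry entries below repeat with identical payloads and advancing timestamps.
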
Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.896562 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.896576 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.896596 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.896611 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:02Z","lastTransitionTime":"2026-01-22T09:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.903112 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\
"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:02Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:02 crc kubenswrapper[4892]: E0122 09:11:02.909595 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c930485-9734-4304-ad2c-ecfe6f90ae0f\\\",\\\"systemUUID\\\":\\\"61509b40-08df-4430-847e-d3a8d2681f9e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:02Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:02 crc kubenswrapper[4892]: E0122 09:11:02.909925 4892 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.911455 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.911497 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.911511 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.911529 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.911542 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:02Z","lastTransitionTime":"2026-01-22T09:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.921964 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://486331c87a00f1a615c742b0b281e00e3babb521
c17a58b73f3eb4777e502a47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:02Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.933888 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:02Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.947772 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:02Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.961321 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:02Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.971760 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:02Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.983050 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:02Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:02 crc kubenswrapper[4892]: I0122 09:11:02.992828 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:02Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.006386 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:03Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.013616 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.013660 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.013671 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.013690 4892 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.013702 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:03Z","lastTransitionTime":"2026-01-22T09:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.016805 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:03Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.026028 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:03Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.034139 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gqbrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2782a4-367a-4690-911a-06ca51331fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e069b9560e67a7f6e2188bfbfdd2180c36c0b0956f4a1d7c7f5803abcd3603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhzbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gqbrf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:03Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.044760 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:03Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.116686 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.116732 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.116743 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.116760 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.116772 4892 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:03Z","lastTransitionTime":"2026-01-22T09:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.219368 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.219427 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.219442 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.219459 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.219470 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:03Z","lastTransitionTime":"2026-01-22T09:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.322151 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.322196 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.322208 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.322229 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.322246 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:03Z","lastTransitionTime":"2026-01-22T09:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.380832 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 02:44:03.713343575 +0000 UTC Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.424538 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.424591 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.424604 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.424620 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.424632 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:03Z","lastTransitionTime":"2026-01-22T09:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.527895 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.527957 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.527977 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.528001 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.528020 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:03Z","lastTransitionTime":"2026-01-22T09:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.630137 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.630196 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.630213 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.630238 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.630256 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:03Z","lastTransitionTime":"2026-01-22T09:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.654556 4892 generic.go:334] "Generic (PLEG): container finished" podID="afe12181-a266-4b88-b591-e1c130d15254" containerID="11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8" exitCode=0 Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.654689 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" event={"ID":"afe12181-a266-4b88-b591-e1c130d15254","Type":"ContainerDied","Data":"11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8"} Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.654725 4892 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.671409 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:03Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.698039 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://486331c87a00f1a615c742b0b281e00e3babb521c17a58b73f3eb4777e502a47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:03Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.718029 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:03Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.732599 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.732840 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:03 crc 
kubenswrapper[4892]: I0122 09:11:03.732965 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.733093 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.733189 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:03Z","lastTransitionTime":"2026-01-22T09:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.735092 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:03Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.750773 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:03Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.761866 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:03Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.790363 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:03Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.807403 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:03Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.842655 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.842705 4892 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.842716 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.842732 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.842744 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:03Z","lastTransitionTime":"2026-01-22T09:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.845141 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:03Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.859019 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:03Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.873013 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:03Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.884681 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:03Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.895385 4892 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-gqbrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2782a4-367a-4690-911a-06ca51331fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e069b9560e67a7f6e2188bfbfdd2180c36c0b0956f4a1d7c7f5803abcd3603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhzbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gqbrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:03Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.908160 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:03Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.945231 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.945273 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.945299 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.945316 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:03 crc kubenswrapper[4892]: I0122 09:11:03.945327 4892 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:03Z","lastTransitionTime":"2026-01-22T09:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.047870 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.047920 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.047932 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.047947 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.047958 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:04Z","lastTransitionTime":"2026-01-22T09:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.150678 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.150745 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.150764 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.150788 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.150807 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:04Z","lastTransitionTime":"2026-01-22T09:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.253571 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.253614 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.253627 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.253645 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.253659 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:04Z","lastTransitionTime":"2026-01-22T09:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.356525 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.356564 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.356578 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.356595 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.356607 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:04Z","lastTransitionTime":"2026-01-22T09:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.381160 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 12:51:50.945813401 +0000 UTC Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.418682 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.418686 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:11:04 crc kubenswrapper[4892]: E0122 09:11:04.418873 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:11:04 crc kubenswrapper[4892]: E0122 09:11:04.419008 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.419371 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:11:04 crc kubenswrapper[4892]: E0122 09:11:04.419587 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.459029 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.459101 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.459126 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.459158 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.459182 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:04Z","lastTransitionTime":"2026-01-22T09:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.562076 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.562333 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.562345 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.562358 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.562368 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:04Z","lastTransitionTime":"2026-01-22T09:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.661169 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" event={"ID":"afe12181-a266-4b88-b591-e1c130d15254","Type":"ContainerStarted","Data":"4cf8a3ae4a4598966cb6d9be3d22dbad427f414e81a94e33d3283e4c6d662a1b"} Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.661458 4892 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.663780 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.663815 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.663824 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.663839 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.663848 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:04Z","lastTransitionTime":"2026-01-22T09:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.676878 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:04Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.688764 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:04Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.698066 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:04Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.708549 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:04Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.718575 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:04Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.728542 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:04Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.740922 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:04Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.753617 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:04Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.767579 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.768051 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.768143 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.768403 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.768491 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:04Z","lastTransitionTime":"2026-01-22T09:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.767715 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:04Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.781976 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:04Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.791762 4892 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-gqbrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2782a4-367a-4690-911a-06ca51331fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e069b9560e67a7f6e2188bfbfdd2180c36c0b0956f4a1d7c7f5803abcd3603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhzbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gqbrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:04Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.808354 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:04Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.834097 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://486331c87a00f1a615c742b0b281e00e3babb521c17a58b73f3eb4777e502a47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:04Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.852030 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cf8a3ae4a4598966cb6d9be3d22dbad427f414e81a94e33d3283e4c6d662a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:04Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.871252 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.871317 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:04 crc 
kubenswrapper[4892]: I0122 09:11:04.871329 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.871348 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.871360 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:04Z","lastTransitionTime":"2026-01-22T09:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.973507 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.973538 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.973546 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.973560 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:04 crc kubenswrapper[4892]: I0122 09:11:04.973570 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:04Z","lastTransitionTime":"2026-01-22T09:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.075988 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.076029 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.076046 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.076064 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.076076 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:05Z","lastTransitionTime":"2026-01-22T09:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.178402 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.178995 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.179118 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.179232 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.179472 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:05Z","lastTransitionTime":"2026-01-22T09:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.283366 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.283404 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.283415 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.283429 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.283437 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:05Z","lastTransitionTime":"2026-01-22T09:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.381326 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 07:57:35.824494346 +0000 UTC Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.386533 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.386567 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.386578 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.386593 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.386605 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:05Z","lastTransitionTime":"2026-01-22T09:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.489059 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.489098 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.489109 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.489124 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.489136 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:05Z","lastTransitionTime":"2026-01-22T09:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.520435 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.591727 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.591871 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.591890 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.591912 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.591958 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:05Z","lastTransitionTime":"2026-01-22T09:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.668262 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-whb2h_a93623e9-3eab-47bb-b94a-5b962f3eb203/ovnkube-controller/0.log" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.673674 4892 generic.go:334] "Generic (PLEG): container finished" podID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerID="486331c87a00f1a615c742b0b281e00e3babb521c17a58b73f3eb4777e502a47" exitCode=1 Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.673801 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" event={"ID":"a93623e9-3eab-47bb-b94a-5b962f3eb203","Type":"ContainerDied","Data":"486331c87a00f1a615c742b0b281e00e3babb521c17a58b73f3eb4777e502a47"} Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.674915 4892 scope.go:117] "RemoveContainer" containerID="486331c87a00f1a615c742b0b281e00e3babb521c17a58b73f3eb4777e502a47" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.695800 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.695878 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.695904 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.695937 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.695962 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:05Z","lastTransitionTime":"2026-01-22T09:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.698413 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:05Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.729622 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://486331c87a00f1a615c742b0b281e00e3babb521c17a58b73f3eb4777e502a47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://486331c87a00f1a615c742b0b281e00e3babb521c17a58b73f3eb4777e502a47\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"message\\\":\\\"22 09:11:05.515352 6221 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 09:11:05.515392 6221 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 09:11:05.515407 6221 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0122 09:11:05.515457 6221 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 09:11:05.515495 6221 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 09:11:05.515521 6221 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 09:11:05.515536 6221 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 09:11:05.515556 6221 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 09:11:05.515575 6221 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 09:11:05.515595 6221 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 09:11:05.515608 6221 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:05.515622 6221 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 09:11:05.515610 6221 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 09:11:05.515473 6221 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:05Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.757126 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cf8a3ae4a4598966cb6d9be3d22dbad427f414e81a94e33d3283e4c6d662a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerI
D\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:05Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.773503 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:05Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.795732 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:05Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.798250 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.798315 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.798327 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.798340 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.798350 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:05Z","lastTransitionTime":"2026-01-22T09:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.814416 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:05Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.833793 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:05Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.851254 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:05Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.871939 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:05Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.892850 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:05Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.902470 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.902564 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.902587 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.902618 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.902643 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:05Z","lastTransitionTime":"2026-01-22T09:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.905802 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:05Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.920323 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:05Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.931231 4892 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-gqbrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2782a4-367a-4690-911a-06ca51331fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e069b9560e67a7f6e2188bfbfdd2180c36c0b0956f4a1d7c7f5803abcd3603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhzbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gqbrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:05Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.945338 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:05Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.981173 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:11:05 crc kubenswrapper[4892]: I0122 09:11:05.992861 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:05Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.004852 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.004884 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.004894 4892 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.004909 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.004921 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:06Z","lastTransitionTime":"2026-01-22T09:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.010904 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://486331c87a00f1a615c742b0b281e00e3babb521
c17a58b73f3eb4777e502a47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://486331c87a00f1a615c742b0b281e00e3babb521c17a58b73f3eb4777e502a47\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"message\\\":\\\"22 09:11:05.515352 6221 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 09:11:05.515392 6221 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 09:11:05.515407 6221 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0122 09:11:05.515457 6221 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 09:11:05.515495 6221 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 09:11:05.515521 6221 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 09:11:05.515536 6221 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 09:11:05.515556 6221 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 09:11:05.515575 6221 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 09:11:05.515595 6221 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 09:11:05.515608 6221 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:05.515622 6221 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 09:11:05.515610 6221 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 09:11:05.515473 6221 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.027621 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cf8a3ae4a4598966cb6d9be3d22dbad427f414e81a94e33d3283e4c6d662a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerI
D\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.080897 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.094710 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.106546 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.106584 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.106596 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.106611 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.106622 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:06Z","lastTransitionTime":"2026-01-22T09:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.109590 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.123211 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.134932 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.135047 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.135069 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:11:06 crc kubenswrapper[4892]: E0122 09:11:06.135096 4892 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:11:22.135067134 +0000 UTC m=+51.979146197 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.135139 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:11:06 crc kubenswrapper[4892]: E0122 09:11:06.135153 4892 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:11:06 crc kubenswrapper[4892]: E0122 09:11:06.135199 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 09:11:22.135186367 +0000 UTC m=+51.979265430 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:11:06 crc kubenswrapper[4892]: E0122 09:11:06.135237 4892 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:11:06 crc kubenswrapper[4892]: E0122 09:11:06.135252 4892 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:11:06 crc kubenswrapper[4892]: E0122 09:11:06.135258 4892 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:11:06 crc kubenswrapper[4892]: E0122 09:11:06.135269 4892 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:11:06 crc kubenswrapper[4892]: E0122 09:11:06.135273 4892 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:11:06 crc kubenswrapper[4892]: E0122 09:11:06.135296 4892 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:11:06 crc kubenswrapper[4892]: E0122 09:11:06.135323 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 09:11:22.13531488 +0000 UTC m=+51.979393943 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:11:06 crc kubenswrapper[4892]: E0122 09:11:06.135337 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 09:11:22.135331711 +0000 UTC m=+51.979410774 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.136179 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.146937 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.157314 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.165974 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.174302 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gqbrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2782a4-367a-4690-911a-06ca51331fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e069b9560e67a7f6e2188bfbfdd2180c36c0b0956f4a1d7c7f5803abcd3603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhzbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gqbrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.185398 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.195890 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.208101 4892 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.208129 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.208137 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.208150 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.208173 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:06Z","lastTransitionTime":"2026-01-22T09:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.235785 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:11:06 crc kubenswrapper[4892]: E0122 09:11:06.235897 4892 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 09:11:06 crc kubenswrapper[4892]: E0122 09:11:06.235961 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 09:11:22.235945321 +0000 UTC m=+52.080024384 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.310005 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.310039 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.310054 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.310066 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.310074 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:06Z","lastTransitionTime":"2026-01-22T09:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.382423 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 01:16:18.496769392 +0000 UTC Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.413863 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.413951 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.413969 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.413994 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.414010 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:06Z","lastTransitionTime":"2026-01-22T09:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.418111 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.418169 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.418198 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:11:06 crc kubenswrapper[4892]: E0122 09:11:06.418325 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:11:06 crc kubenswrapper[4892]: E0122 09:11:06.418417 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:11:06 crc kubenswrapper[4892]: E0122 09:11:06.418555 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.516854 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.516905 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.516921 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.516936 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.516946 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:06Z","lastTransitionTime":"2026-01-22T09:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.620045 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.620125 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.620140 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.620157 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.620192 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:06Z","lastTransitionTime":"2026-01-22T09:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.680396 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-whb2h_a93623e9-3eab-47bb-b94a-5b962f3eb203/ovnkube-controller/0.log" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.683560 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" event={"ID":"a93623e9-3eab-47bb-b94a-5b962f3eb203","Type":"ContainerStarted","Data":"510dc3684ca7a49935e312d797d6be46b9febb3dc0cb682405d4634477498ab0"} Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.684135 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.696819 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.712165 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.724232 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.724270 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.724278 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.724313 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.724323 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:06Z","lastTransitionTime":"2026-01-22T09:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.725583 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.749122 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.767469 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.780890 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-da
emon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.792038 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gqbrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2782a4-367a-4690-911a-06ca51331fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e069b9560e67a7f6e2188bfbfdd2180c36c0b0956f4a1d7c7f5803abcd3603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhzbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gqbrf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.806013 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.822361 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cf8a3ae4a4598966cb6d9be3d22dbad427f414e81a94e33d3283e4c6d662a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.826411 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.826441 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:06 crc 
kubenswrapper[4892]: I0122 09:11:06.826464 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.826477 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.826486 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:06Z","lastTransitionTime":"2026-01-22T09:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.834917 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.851478 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://510dc3684ca7a49935e312d797d6be46b9febb3d
c0cb682405d4634477498ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://486331c87a00f1a615c742b0b281e00e3babb521c17a58b73f3eb4777e502a47\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"message\\\":\\\"22 09:11:05.515352 6221 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 09:11:05.515392 6221 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 09:11:05.515407 6221 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0122 09:11:05.515457 6221 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 09:11:05.515495 6221 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 09:11:05.515521 6221 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 09:11:05.515536 6221 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 09:11:05.515556 6221 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 09:11:05.515575 6221 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 09:11:05.515595 6221 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 09:11:05.515608 6221 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:05.515622 6221 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 09:11:05.515610 6221 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 09:11:05.515473 6221 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.864110 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.877134 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.890877 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.929303 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.929363 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.929373 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.929387 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:06 crc kubenswrapper[4892]: I0122 09:11:06.929396 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:06Z","lastTransitionTime":"2026-01-22T09:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.032188 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.032254 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.032266 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.032301 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.032313 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:07Z","lastTransitionTime":"2026-01-22T09:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.135267 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.135357 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.135380 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.135406 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.135425 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:07Z","lastTransitionTime":"2026-01-22T09:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.238598 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.238704 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.238723 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.238748 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.238765 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:07Z","lastTransitionTime":"2026-01-22T09:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.344338 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.344384 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.344394 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.344409 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.344420 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:07Z","lastTransitionTime":"2026-01-22T09:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.383357 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 04:27:15.660830666 +0000 UTC Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.446695 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.446725 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.446733 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.446746 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.446755 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:07Z","lastTransitionTime":"2026-01-22T09:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.549465 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.549512 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.549528 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.549551 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.549568 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:07Z","lastTransitionTime":"2026-01-22T09:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.652368 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.652406 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.652414 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.652428 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.652442 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:07Z","lastTransitionTime":"2026-01-22T09:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.690362 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-whb2h_a93623e9-3eab-47bb-b94a-5b962f3eb203/ovnkube-controller/1.log" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.691910 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-whb2h_a93623e9-3eab-47bb-b94a-5b962f3eb203/ovnkube-controller/0.log" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.695048 4892 generic.go:334] "Generic (PLEG): container finished" podID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerID="510dc3684ca7a49935e312d797d6be46b9febb3dc0cb682405d4634477498ab0" exitCode=1 Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.695088 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" event={"ID":"a93623e9-3eab-47bb-b94a-5b962f3eb203","Type":"ContainerDied","Data":"510dc3684ca7a49935e312d797d6be46b9febb3dc0cb682405d4634477498ab0"} Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.695139 4892 scope.go:117] "RemoveContainer" containerID="486331c87a00f1a615c742b0b281e00e3babb521c17a58b73f3eb4777e502a47" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.696326 4892 scope.go:117] "RemoveContainer" containerID="510dc3684ca7a49935e312d797d6be46b9febb3dc0cb682405d4634477498ab0" Jan 22 09:11:07 crc kubenswrapper[4892]: E0122 09:11:07.696770 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-whb2h_openshift-ovn-kubernetes(a93623e9-3eab-47bb-b94a-5b962f3eb203)\"" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.709109 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:07Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.735752 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://510dc3684ca7a49935e312d797d6be46b9febb3dc0cb682405d4634477498ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://486331c87a00f1a615c742b0b281e00e3babb521c17a58b73f3eb4777e502a47\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"message\\\":\\\"22 09:11:05.515352 6221 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 09:11:05.515392 6221 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 09:11:05.515407 6221 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0122 09:11:05.515457 6221 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 09:11:05.515495 6221 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 09:11:05.515521 6221 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 09:11:05.515536 6221 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 09:11:05.515556 6221 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 09:11:05.515575 6221 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 09:11:05.515595 6221 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 09:11:05.515608 6221 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:05.515622 6221 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 09:11:05.515610 6221 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 09:11:05.515473 6221 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510dc3684ca7a49935e312d797d6be46b9febb3dc0cb682405d4634477498ab0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:11:07Z\\\",\\\"message\\\":\\\"er-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0122 09:11:06.470526 6364 services_controller.go:452] Built service openshift-marketplace/marketplace-operator-metrics per-node LB for network=default: []services.LB{}\\\\nI0122 09:11:06.470532 6364 services_controller.go:452] Built service default/kubernetes per-node LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_router_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"169.254.0.2\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string(nil), Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Grou\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c743919fc33792e7a31
e4ad09c85bc4c020be180d23f1dfd37c130a45503be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:07Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.748885 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cf8a3ae4a4598966cb6d9be3d22dbad427f414e81a94e33d3283e4c6d662a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:07Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.755427 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.755464 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:07 crc 
kubenswrapper[4892]: I0122 09:11:07.755473 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.755489 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.755499 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:07Z","lastTransitionTime":"2026-01-22T09:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.761997 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:07Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.773447 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:07Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.783886 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:07Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.795652 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:07Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.810686 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:07Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.823738 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:07Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.837212 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:07Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.847332 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:07Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.858023 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.858080 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.858092 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.858107 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.858119 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:07Z","lastTransitionTime":"2026-01-22T09:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.859417 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:07Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.868961 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:07Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.882281 4892 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-gqbrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2782a4-367a-4690-911a-06ca51331fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e069b9560e67a7f6e2188bfbfdd2180c36c0b0956f4a1d7c7f5803abcd3603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhzbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gqbrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:07Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.961276 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.961357 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.961368 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.961387 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:07 crc kubenswrapper[4892]: I0122 09:11:07.961403 4892 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:07Z","lastTransitionTime":"2026-01-22T09:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.063923 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.063963 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.063975 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.063990 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.064001 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:08Z","lastTransitionTime":"2026-01-22T09:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.166209 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.166403 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.166425 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.166451 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.166468 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:08Z","lastTransitionTime":"2026-01-22T09:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.269446 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.269479 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.269493 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.269512 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.269526 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:08Z","lastTransitionTime":"2026-01-22T09:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.373045 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.373581 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.373759 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.373962 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.374118 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:08Z","lastTransitionTime":"2026-01-22T09:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.384320 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 16:52:45.746917594 +0000 UTC Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.417638 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.417774 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.418228 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:11:08 crc kubenswrapper[4892]: E0122 09:11:08.418483 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:11:08 crc kubenswrapper[4892]: E0122 09:11:08.418619 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:11:08 crc kubenswrapper[4892]: E0122 09:11:08.418690 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.477407 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.477443 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.477451 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.477469 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.477479 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:08Z","lastTransitionTime":"2026-01-22T09:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.580543 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.580975 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.581147 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.581364 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.581540 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:08Z","lastTransitionTime":"2026-01-22T09:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.603345 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf"] Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.604385 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.606679 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.606815 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.619546 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:08Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.632203 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:08Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.644793 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20e79d60-66bf-44b6-8e7c-f8d995b5cc79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:11:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ntkhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:08Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.659395 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:08Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.661830 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/20e79d60-66bf-44b6-8e7c-f8d995b5cc79-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ntkhf\" (UID: \"20e79d60-66bf-44b6-8e7c-f8d995b5cc79\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.661912 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/20e79d60-66bf-44b6-8e7c-f8d995b5cc79-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ntkhf\" (UID: \"20e79d60-66bf-44b6-8e7c-f8d995b5cc79\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.661941 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/20e79d60-66bf-44b6-8e7c-f8d995b5cc79-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ntkhf\" (UID: \"20e79d60-66bf-44b6-8e7c-f8d995b5cc79\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.661976 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-882n2\" (UniqueName: \"kubernetes.io/projected/20e79d60-66bf-44b6-8e7c-f8d995b5cc79-kube-api-access-882n2\") pod \"ovnkube-control-plane-749d76644c-ntkhf\" (UID: \"20e79d60-66bf-44b6-8e7c-f8d995b5cc79\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.674120 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:08Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.684265 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.684323 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.684336 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.684355 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.684369 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:08Z","lastTransitionTime":"2026-01-22T09:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.688063 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:08Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.699272 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-whb2h_a93623e9-3eab-47bb-b94a-5b962f3eb203/ovnkube-controller/1.log" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.703042 4892 scope.go:117] "RemoveContainer" containerID="510dc3684ca7a49935e312d797d6be46b9febb3dc0cb682405d4634477498ab0" Jan 22 09:11:08 crc kubenswrapper[4892]: E0122 09:11:08.703444 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-whb2h_openshift-ovn-kubernetes(a93623e9-3eab-47bb-b94a-5b962f3eb203)\"" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.705523 4892 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:08Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.716872 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:08Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.730449 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:08Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.743772 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:08Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.754598 4892 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-gqbrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2782a4-367a-4690-911a-06ca51331fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e069b9560e67a7f6e2188bfbfdd2180c36c0b0956f4a1d7c7f5803abcd3603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhzbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gqbrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:08Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.762871 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/20e79d60-66bf-44b6-8e7c-f8d995b5cc79-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ntkhf\" (UID: \"20e79d60-66bf-44b6-8e7c-f8d995b5cc79\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.762985 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/20e79d60-66bf-44b6-8e7c-f8d995b5cc79-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ntkhf\" (UID: \"20e79d60-66bf-44b6-8e7c-f8d995b5cc79\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 
09:11:08.763023 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/20e79d60-66bf-44b6-8e7c-f8d995b5cc79-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ntkhf\" (UID: \"20e79d60-66bf-44b6-8e7c-f8d995b5cc79\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.763053 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-882n2\" (UniqueName: \"kubernetes.io/projected/20e79d60-66bf-44b6-8e7c-f8d995b5cc79-kube-api-access-882n2\") pod \"ovnkube-control-plane-749d76644c-ntkhf\" (UID: \"20e79d60-66bf-44b6-8e7c-f8d995b5cc79\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.764201 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/20e79d60-66bf-44b6-8e7c-f8d995b5cc79-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ntkhf\" (UID: \"20e79d60-66bf-44b6-8e7c-f8d995b5cc79\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.764334 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/20e79d60-66bf-44b6-8e7c-f8d995b5cc79-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ntkhf\" (UID: \"20e79d60-66bf-44b6-8e7c-f8d995b5cc79\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.771556 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:08Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.771984 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/20e79d60-66bf-44b6-8e7c-f8d995b5cc79-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ntkhf\" (UID: \"20e79d60-66bf-44b6-8e7c-f8d995b5cc79\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.781753 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-882n2\" (UniqueName: \"kubernetes.io/projected/20e79d60-66bf-44b6-8e7c-f8d995b5cc79-kube-api-access-882n2\") pod \"ovnkube-control-plane-749d76644c-ntkhf\" (UID: \"20e79d60-66bf-44b6-8e7c-f8d995b5cc79\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.786855 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.786890 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.786900 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.786914 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.786924 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:08Z","lastTransitionTime":"2026-01-22T09:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.796391 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://510dc3684ca7a49935e312d797d6be46b9febb3d
c0cb682405d4634477498ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://486331c87a00f1a615c742b0b281e00e3babb521c17a58b73f3eb4777e502a47\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"message\\\":\\\"22 09:11:05.515352 6221 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 09:11:05.515392 6221 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 09:11:05.515407 6221 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0122 09:11:05.515457 6221 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 09:11:05.515495 6221 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 09:11:05.515521 6221 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 09:11:05.515536 6221 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 09:11:05.515556 6221 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 09:11:05.515575 6221 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 09:11:05.515595 6221 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 09:11:05.515608 6221 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:05.515622 6221 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 09:11:05.515610 6221 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 09:11:05.515473 6221 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510dc3684ca7a49935e312d797d6be46b9febb3dc0cb682405d4634477498ab0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:11:07Z\\\",\\\"message\\\":\\\"er-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0122 09:11:06.470526 6364 services_controller.go:452] Built service openshift-marketplace/marketplace-operator-metrics per-node LB for network=default: []services.LB{}\\\\nI0122 09:11:06.470532 6364 services_controller.go:452] Built service default/kubernetes per-node LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_router_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"169.254.0.2\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string(nil), Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Grou\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:08Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.812648 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cf8a3ae4a4598966cb6d9be3d22dbad427f414e81a94e33d3283e4c6d662a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:08Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.824239 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:08Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.835866 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:08Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.846845 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:08Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.861008 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:08Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.873878 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:08Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.886142 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:08Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.888780 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.888825 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.888841 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.888863 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.888880 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:08Z","lastTransitionTime":"2026-01-22T09:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.898005 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:08Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.912424 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gqbrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2782a4-367a-4690-911a-06ca51331fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e069b9560e67a7f6e2188bfbfdd2180c36c0b0956f4a1d7c7f5803abcd3603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhzbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gqbrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:08Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.917791 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.929240 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"nam
e\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:08Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:08 crc kubenswrapper[4892]: W0122 09:11:08.933811 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20e79d60_66bf_44b6_8e7c_f8d995b5cc79.slice/crio-aefe73f4b4c43d7a26f4a60c0c7cc7a5e5ec4e948d904d85f93060fe22dc6739 WatchSource:0}: Error finding container aefe73f4b4c43d7a26f4a60c0c7cc7a5e5ec4e948d904d85f93060fe22dc6739: Status 404 returned error can't find the container with id aefe73f4b4c43d7a26f4a60c0c7cc7a5e5ec4e948d904d85f93060fe22dc6739 Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.946916 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:08Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.974143 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://510dc3684ca7a49935e312d797d6be46b9febb3dc0cb682405d4634477498ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510dc3684ca7a49935e312d797d6be46b9febb3dc0cb682405d4634477498ab0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:11:07Z\\\",\\\"message\\\":\\\"er-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0122 09:11:06.470526 6364 services_controller.go:452] Built service openshift-marketplace/marketplace-operator-metrics per-node LB for network=default: []services.LB{}\\\\nI0122 09:11:06.470532 6364 services_controller.go:452] Built service default/kubernetes per-node LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_router_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"169.254.0.2\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string(nil), Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Grou\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-whb2h_openshift-ovn-kubernetes(a93623e9-3eab-47bb-b94a-5b962f3eb203)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:08Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.991546 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.991585 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.991597 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.991612 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.991622 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:08Z","lastTransitionTime":"2026-01-22T09:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:08 crc kubenswrapper[4892]: I0122 09:11:08.992091 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cf8a3ae4a4598966cb6d9be3d22dbad427f414e81a94e33d3283e4c6d662a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:08Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.004005 4892 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20e79d60-66bf-44b6-8e7c-f8d995b5cc79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:11:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ntkhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:09Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.018760 4892 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a3
3c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:09Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.035024 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:09Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.046637 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:09Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.093969 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.094016 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.094027 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.094039 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.094048 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:09Z","lastTransitionTime":"2026-01-22T09:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.197258 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.197312 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.197321 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.197336 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.197345 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:09Z","lastTransitionTime":"2026-01-22T09:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.300725 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.301120 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.301140 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.301226 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.301321 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:09Z","lastTransitionTime":"2026-01-22T09:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.386429 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 13:59:45.377187608 +0000 UTC Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.403851 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.403885 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.403899 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.403918 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.403932 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:09Z","lastTransitionTime":"2026-01-22T09:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.506371 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.506416 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.506428 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.506442 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.506454 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:09Z","lastTransitionTime":"2026-01-22T09:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.609127 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.609175 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.609185 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.609199 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.609209 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:09Z","lastTransitionTime":"2026-01-22T09:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.706659 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" event={"ID":"20e79d60-66bf-44b6-8e7c-f8d995b5cc79","Type":"ContainerStarted","Data":"75b4ba98c4f3bef5c4535b29e4eaa79ec0d12d63883368ade682af671f06d8e5"} Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.706716 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" event={"ID":"20e79d60-66bf-44b6-8e7c-f8d995b5cc79","Type":"ContainerStarted","Data":"754583e28044dfcccfb14919505f1bc67c0077cbc73e8219ea5d5ac9ff75d147"} Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.706733 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" event={"ID":"20e79d60-66bf-44b6-8e7c-f8d995b5cc79","Type":"ContainerStarted","Data":"aefe73f4b4c43d7a26f4a60c0c7cc7a5e5ec4e948d904d85f93060fe22dc6739"} Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.711109 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.711146 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.711157 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.711172 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.711182 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:09Z","lastTransitionTime":"2026-01-22T09:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.718073 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:09Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.725234 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-5nnld"] Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.725831 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:11:09 crc kubenswrapper[4892]: E0122 09:11:09.725882 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.728661 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20e79d60-66bf-44b6-8e7c-f8d995b5cc79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://754583e28044dfcccfb14919505f1bc67c0077cbc73e8219ea5d5ac9ff75d147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b4ba98c4f3bef5c4535b29e4eaa79ec0d12d63883368ade682af671f06d8e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:11:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ntkhf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:09Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.740247 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:09Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.751135 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:09Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.765213 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:09Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.772612 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7391f43-09a9-4333-8df2-72d4fdc02615-metrics-certs\") pod \"network-metrics-daemon-5nnld\" (UID: \"f7391f43-09a9-4333-8df2-72d4fdc02615\") " pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.772744 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99966\" (UniqueName: \"kubernetes.io/projected/f7391f43-09a9-4333-8df2-72d4fdc02615-kube-api-access-99966\") pod \"network-metrics-daemon-5nnld\" (UID: \"f7391f43-09a9-4333-8df2-72d4fdc02615\") " pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.780004 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:09Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.789901 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:09Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.803195 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:09Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.813777 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.813823 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.813834 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.813852 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.813865 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:09Z","lastTransitionTime":"2026-01-22T09:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.814512 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:09Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.824009 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:09Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.832725 4892 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-gqbrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2782a4-367a-4690-911a-06ca51331fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e069b9560e67a7f6e2188bfbfdd2180c36c0b0956f4a1d7c7f5803abcd3603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhzbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gqbrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:09Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.844755 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:09Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.857136 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cf8a3ae4a4598966cb6d9be3d22dbad427f414e81a94e33d3283e4c6d662a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:09Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.869108 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:09Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.873477 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7391f43-09a9-4333-8df2-72d4fdc02615-metrics-certs\") pod \"network-metrics-daemon-5nnld\" (UID: \"f7391f43-09a9-4333-8df2-72d4fdc02615\") " pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.873555 4892 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99966\" (UniqueName: \"kubernetes.io/projected/f7391f43-09a9-4333-8df2-72d4fdc02615-kube-api-access-99966\") pod \"network-metrics-daemon-5nnld\" (UID: \"f7391f43-09a9-4333-8df2-72d4fdc02615\") " pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:11:09 crc kubenswrapper[4892]: E0122 09:11:09.873653 4892 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:11:09 crc kubenswrapper[4892]: E0122 09:11:09.873733 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7391f43-09a9-4333-8df2-72d4fdc02615-metrics-certs podName:f7391f43-09a9-4333-8df2-72d4fdc02615 nodeName:}" failed. No retries permitted until 2026-01-22 09:11:10.373714308 +0000 UTC m=+40.217793391 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f7391f43-09a9-4333-8df2-72d4fdc02615-metrics-certs") pod "network-metrics-daemon-5nnld" (UID: "f7391f43-09a9-4333-8df2-72d4fdc02615") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.887054 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://510dc3684ca7a49935e312d797d6be46b9febb3d
c0cb682405d4634477498ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510dc3684ca7a49935e312d797d6be46b9febb3dc0cb682405d4634477498ab0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:11:07Z\\\",\\\"message\\\":\\\"er-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0122 09:11:06.470526 6364 services_controller.go:452] Built service openshift-marketplace/marketplace-operator-metrics per-node LB for network=default: []services.LB{}\\\\nI0122 09:11:06.470532 6364 services_controller.go:452] Built service default/kubernetes per-node LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_router_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"169.254.0.2\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string(nil), Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Grou\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-whb2h_openshift-ovn-kubernetes(a93623e9-3eab-47bb-b94a-5b962f3eb203)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:09Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.891320 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99966\" (UniqueName: \"kubernetes.io/projected/f7391f43-09a9-4333-8df2-72d4fdc02615-kube-api-access-99966\") pod \"network-metrics-daemon-5nnld\" (UID: \"f7391f43-09a9-4333-8df2-72d4fdc02615\") " pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.898131 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:09Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.907606 4892 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-gqbrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2782a4-367a-4690-911a-06ca51331fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e069b9560e67a7f6e2188bfbfdd2180c36c0b0956f4a1d7c7f5803abcd3603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhzbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gqbrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:09Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.916006 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.916049 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.916064 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.916082 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.916092 4892 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:09Z","lastTransitionTime":"2026-01-22T09:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.921028 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:09Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.943179 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://510dc3684ca7a49935e312d797d6be46b9febb3dc0cb682405d4634477498ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510dc3684ca7a49935e312d797d6be46b9febb3dc0cb682405d4634477498ab0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:11:07Z\\\",\\\"message\\\":\\\"er-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0122 09:11:06.470526 6364 services_controller.go:452] Built service openshift-marketplace/marketplace-operator-metrics per-node LB for network=default: []services.LB{}\\\\nI0122 09:11:06.470532 6364 services_controller.go:452] Built service default/kubernetes per-node LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_router_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"169.254.0.2\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string(nil), Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Grou\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-whb2h_openshift-ovn-kubernetes(a93623e9-3eab-47bb-b94a-5b962f3eb203)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:09Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.957128 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cf8a3ae4a4598966cb6d9be3d22dbad427f414e81a94e33d3283e4c6d662a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:09Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.968108 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:09Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.981117 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:09Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:09 crc kubenswrapper[4892]: I0122 09:11:09.991159 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:09Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.001699 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20e79d60-66bf-44b6-8e7c-f8d995b5cc79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://754583e28044dfcccfb14919505f1bc67c0077cbc73e8219ea5d5ac9ff75d147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b4ba98c4f3bef5c4535b29e4eaa79ec0d12d63883368ade682af671f06d8e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:11:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ntkhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:10Z is after 2025-08-24T17:21:41Z" Jan 22 
09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.013111 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5nnld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7391f43-09a9-4333-8df2-72d4fdc02615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99966\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99966\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:11:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5nnld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:10Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.018441 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.018472 4892 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.018482 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.018495 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.018503 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:10Z","lastTransitionTime":"2026-01-22T09:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.023087 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:10Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.032587 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:10Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.042901 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:10Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.097421 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:10Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.106076 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:10Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.116214 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:10Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.121230 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.121258 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.121277 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.121304 4892 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.121317 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:10Z","lastTransitionTime":"2026-01-22T09:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.222866 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.222903 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.222915 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.222930 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.222943 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:10Z","lastTransitionTime":"2026-01-22T09:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.325614 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.325649 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.325658 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.325675 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.325684 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:10Z","lastTransitionTime":"2026-01-22T09:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.377585 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7391f43-09a9-4333-8df2-72d4fdc02615-metrics-certs\") pod \"network-metrics-daemon-5nnld\" (UID: \"f7391f43-09a9-4333-8df2-72d4fdc02615\") " pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:11:10 crc kubenswrapper[4892]: E0122 09:11:10.377691 4892 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:11:10 crc kubenswrapper[4892]: E0122 09:11:10.377739 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7391f43-09a9-4333-8df2-72d4fdc02615-metrics-certs podName:f7391f43-09a9-4333-8df2-72d4fdc02615 nodeName:}" failed. No retries permitted until 2026-01-22 09:11:11.377726607 +0000 UTC m=+41.221805670 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f7391f43-09a9-4333-8df2-72d4fdc02615-metrics-certs") pod "network-metrics-daemon-5nnld" (UID: "f7391f43-09a9-4333-8df2-72d4fdc02615") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.387183 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 12:39:34.108309005 +0000 UTC Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.418463 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.418480 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.418461 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:11:10 crc kubenswrapper[4892]: E0122 09:11:10.418570 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:11:10 crc kubenswrapper[4892]: E0122 09:11:10.418634 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:11:10 crc kubenswrapper[4892]: E0122 09:11:10.418777 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.427580 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.427614 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.427623 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.427637 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.427651 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:10Z","lastTransitionTime":"2026-01-22T09:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.529699 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.529734 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.529742 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.529759 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.529770 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:10Z","lastTransitionTime":"2026-01-22T09:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.632253 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.632326 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.632343 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.632363 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.632381 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:10Z","lastTransitionTime":"2026-01-22T09:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.734928 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.734970 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.734980 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.734994 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.735003 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:10Z","lastTransitionTime":"2026-01-22T09:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.836711 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.836750 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.836762 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.836776 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.836786 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:10Z","lastTransitionTime":"2026-01-22T09:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.938908 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.938944 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.938952 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.938964 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:10 crc kubenswrapper[4892]: I0122 09:11:10.938973 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:10Z","lastTransitionTime":"2026-01-22T09:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.041191 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.041223 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.041231 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.041244 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.041254 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:11Z","lastTransitionTime":"2026-01-22T09:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.143736 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.143772 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.143793 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.143809 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.143820 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:11Z","lastTransitionTime":"2026-01-22T09:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.246006 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.246043 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.246053 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.246069 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.246082 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:11Z","lastTransitionTime":"2026-01-22T09:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.348399 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.348439 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.348448 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.348463 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.348472 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:11Z","lastTransitionTime":"2026-01-22T09:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.388045 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 04:34:30.78509477 +0000 UTC Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.388398 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7391f43-09a9-4333-8df2-72d4fdc02615-metrics-certs\") pod \"network-metrics-daemon-5nnld\" (UID: \"f7391f43-09a9-4333-8df2-72d4fdc02615\") " pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:11:11 crc kubenswrapper[4892]: E0122 09:11:11.388585 4892 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:11:11 crc kubenswrapper[4892]: E0122 09:11:11.388706 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7391f43-09a9-4333-8df2-72d4fdc02615-metrics-certs podName:f7391f43-09a9-4333-8df2-72d4fdc02615 nodeName:}" failed. 
No retries permitted until 2026-01-22 09:11:13.38867664 +0000 UTC m=+43.232755743 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f7391f43-09a9-4333-8df2-72d4fdc02615-metrics-certs") pod "network-metrics-daemon-5nnld" (UID: "f7391f43-09a9-4333-8df2-72d4fdc02615") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.418458 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:11:11 crc kubenswrapper[4892]: E0122 09:11:11.418672 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.435577 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:11Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.452804 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.453069 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.453445 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.453822 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.453994 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:11Z","lastTransitionTime":"2026-01-22T09:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.454494 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:11Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.470086 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20e79d60-66bf-44b6-8e7c-f8d995b5cc79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://754583e28044dfcccfb14919505f1bc67c0077cbc73e8219ea5d5ac9ff75d147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b4ba98c4f3bef5c4535b29e4eaa79ec0d12d63883368ade682af671f06d8e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:11:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ntkhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:11Z is after 2025-08-24T17:21:41Z" Jan 22 
09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.484652 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5nnld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7391f43-09a9-4333-8df2-72d4fdc02615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99966\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99966\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:11:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5nnld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:11Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.498174 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:11Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.516806 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:11Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.528165 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:11Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.538404 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:11Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.546499 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:11Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.555882 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.555916 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.555925 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.555938 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.555950 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:11Z","lastTransitionTime":"2026-01-22T09:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.558864 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:11Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.567653 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:11Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.576477 4892 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-gqbrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2782a4-367a-4690-911a-06ca51331fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e069b9560e67a7f6e2188bfbfdd2180c36c0b0956f4a1d7c7f5803abcd3603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhzbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gqbrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:11Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.590625 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:11Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.609868 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://510dc3684ca7a49935e312d797d6be46b9febb3dc0cb682405d4634477498ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510dc3684ca7a49935e312d797d6be46b9febb3dc0cb682405d4634477498ab0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:11:07Z\\\",\\\"message\\\":\\\"er-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0122 09:11:06.470526 6364 services_controller.go:452] Built service openshift-marketplace/marketplace-operator-metrics per-node LB for network=default: []services.LB{}\\\\nI0122 09:11:06.470532 6364 services_controller.go:452] Built service default/kubernetes per-node LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_router_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"169.254.0.2\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string(nil), Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Grou\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-whb2h_openshift-ovn-kubernetes(a93623e9-3eab-47bb-b94a-5b962f3eb203)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:11Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.625072 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cf8a3ae4a4598966cb6d9be3d22dbad427f414e81a94e33d3283e4c6d662a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:11Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.639921 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:11Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.657890 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.657972 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.657983 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.657999 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.658009 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:11Z","lastTransitionTime":"2026-01-22T09:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.760473 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.760515 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.760523 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.760536 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.760546 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:11Z","lastTransitionTime":"2026-01-22T09:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.862673 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.862705 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.862714 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.862726 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.862738 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:11Z","lastTransitionTime":"2026-01-22T09:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.965218 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.965255 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.965264 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.965278 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:11 crc kubenswrapper[4892]: I0122 09:11:11.965306 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:11Z","lastTransitionTime":"2026-01-22T09:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.067489 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.067546 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.067557 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.067575 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.067587 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:12Z","lastTransitionTime":"2026-01-22T09:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.170411 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.170464 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.170476 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.170491 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.170500 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:12Z","lastTransitionTime":"2026-01-22T09:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.273758 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.273799 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.273808 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.273822 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.273837 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:12Z","lastTransitionTime":"2026-01-22T09:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.376402 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.376453 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.376468 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.376486 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.376498 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:12Z","lastTransitionTime":"2026-01-22T09:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.388807 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 05:49:26.363911701 +0000 UTC Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.418439 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.418475 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.418490 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:11:12 crc kubenswrapper[4892]: E0122 09:11:12.418548 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:11:12 crc kubenswrapper[4892]: E0122 09:11:12.418613 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:11:12 crc kubenswrapper[4892]: E0122 09:11:12.418722 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.479036 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.479091 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.479106 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.479128 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.479145 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:12Z","lastTransitionTime":"2026-01-22T09:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.582178 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.582247 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.582271 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.582335 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.582359 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:12Z","lastTransitionTime":"2026-01-22T09:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.685002 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.685052 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.685066 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.685086 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.685098 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:12Z","lastTransitionTime":"2026-01-22T09:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.788997 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.789043 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.789123 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.789147 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.789164 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:12Z","lastTransitionTime":"2026-01-22T09:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.892528 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.892585 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.892602 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.892627 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.892645 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:12Z","lastTransitionTime":"2026-01-22T09:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.996517 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.996570 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.996582 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.996601 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:12 crc kubenswrapper[4892]: I0122 09:11:12.996613 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:12Z","lastTransitionTime":"2026-01-22T09:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.099744 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.099807 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.099833 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.099866 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.099893 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:13Z","lastTransitionTime":"2026-01-22T09:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.203132 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.203205 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.203231 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.203260 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.203281 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:13Z","lastTransitionTime":"2026-01-22T09:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.219368 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.219418 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.219432 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.219450 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.219464 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:13Z","lastTransitionTime":"2026-01-22T09:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:13 crc kubenswrapper[4892]: E0122 09:11:13.236895 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c930485-9734-4304-ad2c-ecfe6f90ae0f\\\",\\\"systemUUID\\\":\\\"61509b40-08df-4430-847e-d3a8d2681f9e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:13Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.241987 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.242054 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.242078 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.242109 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.242134 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:13Z","lastTransitionTime":"2026-01-22T09:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:13 crc kubenswrapper[4892]: E0122 09:11:13.262781 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c930485-9734-4304-ad2c-ecfe6f90ae0f\\\",\\\"systemUUID\\\":\\\"61509b40-08df-4430-847e-d3a8d2681f9e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:13Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.267724 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.267780 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.267798 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.267823 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.267841 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:13Z","lastTransitionTime":"2026-01-22T09:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:13 crc kubenswrapper[4892]: E0122 09:11:13.284335 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c930485-9734-4304-ad2c-ecfe6f90ae0f\\\",\\\"systemUUID\\\":\\\"61509b40-08df-4430-847e-d3a8d2681f9e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:13Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.288896 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.288933 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.288945 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.288961 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.288973 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:13Z","lastTransitionTime":"2026-01-22T09:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:13 crc kubenswrapper[4892]: E0122 09:11:13.303590 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:13Z is after 2025-08-24T17:21:41Z"
Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.307998 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.308048 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.308063 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.308081 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.308095 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:13Z","lastTransitionTime":"2026-01-22T09:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:13 crc kubenswrapper[4892]: E0122 09:11:13.323701 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:13Z is after 2025-08-24T17:21:41Z"
Jan 22 09:11:13 crc kubenswrapper[4892]: E0122 09:11:13.323896 4892 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.325978 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.326009 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.326020 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.326035 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.326048 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:13Z","lastTransitionTime":"2026-01-22T09:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.389273 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 01:58:14.454626789 +0000 UTC Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.408911 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7391f43-09a9-4333-8df2-72d4fdc02615-metrics-certs\") pod \"network-metrics-daemon-5nnld\" (UID: \"f7391f43-09a9-4333-8df2-72d4fdc02615\") " pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:11:13 crc kubenswrapper[4892]: E0122 09:11:13.409028 4892 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:11:13 crc kubenswrapper[4892]: E0122 09:11:13.409084 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7391f43-09a9-4333-8df2-72d4fdc02615-metrics-certs podName:f7391f43-09a9-4333-8df2-72d4fdc02615 nodeName:}" failed. No retries permitted until 2026-01-22 09:11:17.409070277 +0000 UTC m=+47.253149350 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f7391f43-09a9-4333-8df2-72d4fdc02615-metrics-certs") pod "network-metrics-daemon-5nnld" (UID: "f7391f43-09a9-4333-8df2-72d4fdc02615") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.418042 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:11:13 crc kubenswrapper[4892]: E0122 09:11:13.418179 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.428339 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.428387 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.428405 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.428420 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.428461 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:13Z","lastTransitionTime":"2026-01-22T09:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.531011 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.531046 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.531055 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.531068 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.531077 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:13Z","lastTransitionTime":"2026-01-22T09:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.633349 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.633406 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.633421 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.633446 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.633469 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:13Z","lastTransitionTime":"2026-01-22T09:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.736884 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.736964 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.736982 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.737015 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.737032 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:13Z","lastTransitionTime":"2026-01-22T09:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.840404 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.840461 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.840479 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.840502 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.840519 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:13Z","lastTransitionTime":"2026-01-22T09:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.942737 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.942782 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.942790 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.942804 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:13 crc kubenswrapper[4892]: I0122 09:11:13.942823 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:13Z","lastTransitionTime":"2026-01-22T09:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.045399 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.045463 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.045482 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.045505 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.045522 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:14Z","lastTransitionTime":"2026-01-22T09:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.148364 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.148398 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.148407 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.148422 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.148432 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:14Z","lastTransitionTime":"2026-01-22T09:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.250930 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.250967 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.250982 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.251001 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.251012 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:14Z","lastTransitionTime":"2026-01-22T09:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.364104 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.364144 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.364156 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.364175 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.364185 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:14Z","lastTransitionTime":"2026-01-22T09:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.389827 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 08:29:13.355828272 +0000 UTC Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.418127 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:11:14 crc kubenswrapper[4892]: E0122 09:11:14.418233 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.418320 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.418359 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:11:14 crc kubenswrapper[4892]: E0122 09:11:14.418457 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:11:14 crc kubenswrapper[4892]: E0122 09:11:14.418541 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.466338 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.466385 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.466401 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.466423 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.466440 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:14Z","lastTransitionTime":"2026-01-22T09:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.569503 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.569539 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.569562 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.569579 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.569590 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:14Z","lastTransitionTime":"2026-01-22T09:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.674201 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.674234 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.674242 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.674255 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.674263 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:14Z","lastTransitionTime":"2026-01-22T09:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.775561 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.775593 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.775601 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.775613 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.775621 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:14Z","lastTransitionTime":"2026-01-22T09:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.879187 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.879248 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.879264 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.879311 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.879329 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:14Z","lastTransitionTime":"2026-01-22T09:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.982196 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.982232 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.982242 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.982255 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:14 crc kubenswrapper[4892]: I0122 09:11:14.982265 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:14Z","lastTransitionTime":"2026-01-22T09:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:15 crc kubenswrapper[4892]: I0122 09:11:15.084964 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:15 crc kubenswrapper[4892]: I0122 09:11:15.085056 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:15 crc kubenswrapper[4892]: I0122 09:11:15.085074 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:15 crc kubenswrapper[4892]: I0122 09:11:15.085098 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:15 crc kubenswrapper[4892]: I0122 09:11:15.085117 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:15Z","lastTransitionTime":"2026-01-22T09:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:15 crc kubenswrapper[4892]: I0122 09:11:15.187640 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:15 crc kubenswrapper[4892]: I0122 09:11:15.187720 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:15 crc kubenswrapper[4892]: I0122 09:11:15.187741 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:15 crc kubenswrapper[4892]: I0122 09:11:15.187765 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:15 crc kubenswrapper[4892]: I0122 09:11:15.187784 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:15Z","lastTransitionTime":"2026-01-22T09:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:15 crc kubenswrapper[4892]: I0122 09:11:15.290607 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:15 crc kubenswrapper[4892]: I0122 09:11:15.290658 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:15 crc kubenswrapper[4892]: I0122 09:11:15.290672 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:15 crc kubenswrapper[4892]: I0122 09:11:15.290691 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:15 crc kubenswrapper[4892]: I0122 09:11:15.290708 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:15Z","lastTransitionTime":"2026-01-22T09:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:15 crc kubenswrapper[4892]: I0122 09:11:15.390628 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 00:00:50.74502287 +0000 UTC Jan 22 09:11:15 crc kubenswrapper[4892]: I0122 09:11:15.393546 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:15 crc kubenswrapper[4892]: I0122 09:11:15.393598 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:15 crc kubenswrapper[4892]: I0122 09:11:15.393616 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:15 crc kubenswrapper[4892]: I0122 09:11:15.393640 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:15 crc kubenswrapper[4892]: I0122 09:11:15.393658 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:15Z","lastTransitionTime":"2026-01-22T09:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:15 crc kubenswrapper[4892]: I0122 09:11:15.418088 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:11:15 crc kubenswrapper[4892]: E0122 09:11:15.418266 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:11:15 crc kubenswrapper[4892]: I0122 09:11:15.496544 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:15 crc kubenswrapper[4892]: I0122 09:11:15.496595 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:15 crc kubenswrapper[4892]: I0122 09:11:15.496612 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:15 crc kubenswrapper[4892]: I0122 09:11:15.496638 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:15 crc kubenswrapper[4892]: I0122 09:11:15.496657 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:15Z","lastTransitionTime":"2026-01-22T09:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 22 09:11:16 crc kubenswrapper[4892]: I0122 09:11:16.011039 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:16 crc kubenswrapper[4892]: I0122 09:11:16.011076 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:16 crc kubenswrapper[4892]: I0122 09:11:16.011085 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:16 crc kubenswrapper[4892]: I0122 09:11:16.011098 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:16 crc kubenswrapper[4892]: I0122 09:11:16.011108 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:16Z","lastTransitionTime":"2026-01-22T09:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:16 crc kubenswrapper[4892]: I0122 09:11:16.391615 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 12:38:55.167639365 +0000 UTC
Jan 22 09:11:16 crc kubenswrapper[4892]: I0122 09:11:16.418445 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 09:11:16 crc kubenswrapper[4892]: E0122 09:11:16.418577 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 09:11:16 crc kubenswrapper[4892]: I0122 09:11:16.418970 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 09:11:16 crc kubenswrapper[4892]: E0122 09:11:16.419047 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 09:11:16 crc kubenswrapper[4892]: I0122 09:11:16.419102 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 09:11:16 crc kubenswrapper[4892]: E0122 09:11:16.419176 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
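Each certificate_manager.go:356 line above reports the same certificate expiration but a different rotation deadline. That is consistent with client-go's certificate manager picking a jittered deadline inside the certificate's validity window so a fleet of kubelets does not rotate at once; the 70% to 90% window used in the sketch below is my reading of the upstream behavior and should be treated as an assumption.

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random instant between 70% and 90% of the
// certificate's validity window, approximating the jitter applied by
// client-go's certificate manager (the exact fractions are assumed).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.Add(-365 * 24 * time.Hour) // assumed issuance time
	// Each call yields a different deadline, like the successive
	// "rotation deadline is ..." lines in the log.
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline is", rotationDeadline(notBefore, notAfter))
	}
}
```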
Jan 22 09:11:17 crc kubenswrapper[4892]: I0122 09:11:17.037477 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:17 crc kubenswrapper[4892]: I0122 09:11:17.037522 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:17 crc kubenswrapper[4892]: I0122 09:11:17.037532 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:17 crc kubenswrapper[4892]: I0122 09:11:17.037549 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:17 crc kubenswrapper[4892]: I0122 09:11:17.037561 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:17Z","lastTransitionTime":"2026-01-22T09:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:17 crc kubenswrapper[4892]: I0122 09:11:17.392733 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 23:21:09.866618346 +0000 UTC
Jan 22 09:11:17 crc kubenswrapper[4892]: I0122 09:11:17.418764 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld"
Jan 22 09:11:17 crc kubenswrapper[4892]: E0122 09:11:17.418948 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615"
Jan 22 09:11:17 crc kubenswrapper[4892]: I0122 09:11:17.450898 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7391f43-09a9-4333-8df2-72d4fdc02615-metrics-certs\") pod \"network-metrics-daemon-5nnld\" (UID: \"f7391f43-09a9-4333-8df2-72d4fdc02615\") " pod="openshift-multus/network-metrics-daemon-5nnld"
Jan 22 09:11:17 crc kubenswrapper[4892]: E0122 09:11:17.451162 4892 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 22 09:11:17 crc kubenswrapper[4892]: E0122 09:11:17.451227 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7391f43-09a9-4333-8df2-72d4fdc02615-metrics-certs podName:f7391f43-09a9-4333-8df2-72d4fdc02615 nodeName:}" failed. No retries permitted until 2026-01-22 09:11:25.451209347 +0000 UTC m=+55.295288430 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f7391f43-09a9-4333-8df2-72d4fdc02615-metrics-certs") pod "network-metrics-daemon-5nnld" (UID: "f7391f43-09a9-4333-8df2-72d4fdc02615") : object "openshift-multus"/"metrics-daemon-secret" not registered
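The nestedpendingoperations entry above schedules the next MountVolume attempt 8 seconds out ("durationBeforeRetry 8s"): the volume manager retries failed operations with exponential backoff, so a secret that stays "not registered" is reattempted at increasing intervals rather than in a tight loop. A minimal sketch of such doubling-with-cap backoff follows; the initial delay, factor, and cap are illustrative assumptions, not kubelet's exact constants.

```go
package main

import (
	"fmt"
	"time"
)

// nextRetryDelay doubles the delay after every consecutive failure,
// capped at max. The starting delay and cap are assumed values.
func nextRetryDelay(failures int, initial, max time.Duration) time.Duration {
	d := initial
	for i := 0; i < failures; i++ {
		d *= 2
		if d > max {
			return max
		}
	}
	return d
}

func main() {
	// A mount that keeps failing backs off 500ms, 1s, 2s, 4s, 8s, ...
	// The 8s durationBeforeRetry in the log sits on such a curve.
	for failures := 0; failures < 6; failures++ {
		fmt.Println(nextRetryDelay(failures, 500*time.Millisecond, 2*time.Minute))
	}
}
```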
Jan 22 09:11:18 crc kubenswrapper[4892]: I0122 09:11:18.066491 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:18 crc kubenswrapper[4892]: I0122 09:11:18.066550 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:18 crc kubenswrapper[4892]: I0122 09:11:18.066561 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:18 crc kubenswrapper[4892]: I0122 09:11:18.066576 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:18 crc kubenswrapper[4892]: I0122 09:11:18.066589 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:18Z","lastTransitionTime":"2026-01-22T09:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:18 crc kubenswrapper[4892]: I0122 09:11:18.393095 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 08:21:32.942341951 +0000 UTC
Jan 22 09:11:18 crc kubenswrapper[4892]: I0122 09:11:18.417736 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 09:11:18 crc kubenswrapper[4892]: I0122 09:11:18.417902 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 09:11:18 crc kubenswrapper[4892]: E0122 09:11:18.418043 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 09:11:18 crc kubenswrapper[4892]: I0122 09:11:18.418061 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 09:11:18 crc kubenswrapper[4892]: E0122 09:11:18.418192 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 09:11:18 crc kubenswrapper[4892]: E0122 09:11:18.418325 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 09:11:19 crc kubenswrapper[4892]: I0122 09:11:19.101252 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:19 crc kubenswrapper[4892]: I0122 09:11:19.101360 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:19 crc kubenswrapper[4892]: I0122 09:11:19.101382 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:19 crc kubenswrapper[4892]: I0122 09:11:19.101423 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:19 crc kubenswrapper[4892]: I0122 09:11:19.101445 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:19Z","lastTransitionTime":"2026-01-22T09:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:19 crc kubenswrapper[4892]: I0122 09:11:19.394551 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 18:34:29.482553022 +0000 UTC
Jan 22 09:11:19 crc kubenswrapper[4892]: I0122 09:11:19.418905 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld"
Jan 22 09:11:19 crc kubenswrapper[4892]: E0122 09:11:19.419118 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615"
Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.032351 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.032414 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.032426 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.032441 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.032452 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:20Z","lastTransitionTime":"2026-01-22T09:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.342402 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.342467 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.342485 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.342508 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.342527 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:20Z","lastTransitionTime":"2026-01-22T09:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.395361 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 03:33:02.042086487 +0000 UTC Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.418063 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.418126 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.418079 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:11:20 crc kubenswrapper[4892]: E0122 09:11:20.418819 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:11:20 crc kubenswrapper[4892]: E0122 09:11:20.418860 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:11:20 crc kubenswrapper[4892]: E0122 09:11:20.419073 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.446621 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.446776 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.446800 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.446828 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.446851 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:20Z","lastTransitionTime":"2026-01-22T09:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.550691 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.550750 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.550772 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.550804 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.550826 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:20Z","lastTransitionTime":"2026-01-22T09:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.653945 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.654010 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.654028 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.654051 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.654069 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:20Z","lastTransitionTime":"2026-01-22T09:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.756729 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.756770 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.756787 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.756806 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.756820 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:20Z","lastTransitionTime":"2026-01-22T09:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.859266 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.859346 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.859360 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.859376 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.859411 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:20Z","lastTransitionTime":"2026-01-22T09:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.961420 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.961455 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.961466 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.961483 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:20 crc kubenswrapper[4892]: I0122 09:11:20.961495 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:20Z","lastTransitionTime":"2026-01-22T09:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.063514 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.063542 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.063550 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.063562 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.063570 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:21Z","lastTransitionTime":"2026-01-22T09:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.168327 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.168402 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.168422 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.168450 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.168468 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:21Z","lastTransitionTime":"2026-01-22T09:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.270894 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.270977 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.271014 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.271045 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.271068 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:21Z","lastTransitionTime":"2026-01-22T09:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.374578 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.374612 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.374620 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.374634 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.374645 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:21Z","lastTransitionTime":"2026-01-22T09:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.396119 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 19:52:33.779662462 +0000 UTC Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.417749 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:11:21 crc kubenswrapper[4892]: E0122 09:11:21.417914 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.442592 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.464989 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.477643 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.477678 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.477696 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.477719 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.477730 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:21Z","lastTransitionTime":"2026-01-22T09:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
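Every one of these patch failures is the same TLS error: the network-node-identity webhook at https://127.0.0.1:9743 presents a serving certificate with NotAfter 2025-08-24T17:21:41Z, while the node clock reads 2026-01-22, so verification fails before the patch is ever sent. A sketch reproducing the validity-window check behind the error message (an illustrative re-implementation, not the actual crypto/x509 code path; the NotBefore value is an arbitrary assumption):

```go
package main

import (
	"crypto/x509"
	"fmt"
	"time"
)

// validityError mirrors the style of the
// "x509: certificate has expired or is not yet valid" failures above.
func validityError(cert *x509.Certificate, now time.Time) error {
	if now.Before(cert.NotBefore) {
		return fmt.Errorf("certificate is not yet valid: current time %s is before %s",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	}
	if now.After(cert.NotAfter) {
		return fmt.Errorf("certificate has expired: current time %s is after %s",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	}
	return nil
}

func main() {
	// NotAfter and the clock are taken from the log records above;
	// NotBefore (one year earlier) is an assumption for illustration.
	notAfter, _ := time.Parse(time.RFC3339, "2025-08-24T17:21:41Z")
	now, _ := time.Parse(time.RFC3339, "2026-01-22T09:11:21Z")
	cert := &x509.Certificate{NotBefore: notAfter.AddDate(-1, 0, 0), NotAfter: notAfter}

	if err := validityError(cert, now); err != nil {
		fmt.Println(err) // certificate has expired: ... is after 2025-08-24T17:21:41Z
	}
}
```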
Has your network provider started?"} Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.491030 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.510356 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.524765 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.538119 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.552388 4892 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-gqbrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2782a4-367a-4690-911a-06ca51331fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e069b9560e67a7f6e2188bfbfdd2180c36c0b0956f4a1d7c7f5803abcd3603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhzbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gqbrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.567693 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.580814 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.580902 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.580929 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.580964 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.580988 4892 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:21Z","lastTransitionTime":"2026-01-22T09:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.595539 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://510dc3684ca7a49935e312d797d6be46b9febb3dc0cb682405d4634477498ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510dc3684ca7a49935e312d797d6be46b9febb3dc0cb682405d4634477498ab0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:11:07Z\\\",\\\"message\\\":\\\"er-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0122 09:11:06.470526 6364 services_controller.go:452] Built service openshift-marketplace/marketplace-operator-metrics per-node LB for network=default: []services.LB{}\\\\nI0122 09:11:06.470532 6364 services_controller.go:452] Built service default/kubernetes per-node LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_router_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, 
Targets:[]services.Addr{services.Addr{IP:\\\\\\\"169.254.0.2\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string(nil), Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Grou\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-whb2h_openshift-ovn-kubernetes(a93623e9-3eab-47bb-b94a-5b962f3eb203)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log
\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.616018 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cf8a3ae4a4598966cb6d9be3d22dbad427f414e81a94e33d3283e4c6d662a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.633121 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.650172 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.663481 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.679666 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20e79d60-66bf-44b6-8e7c-f8d995b5cc79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://754583e28044dfcccfb14919505f1bc67c0077cbc73e8219ea5d5ac9ff75d147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b4ba98c4f3bef5c4535b29e4eaa79ec0d12d63883368ade682af671f06d8e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:11:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ntkhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:21Z is after 2025-08-24T17:21:41Z" Jan 22 
09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.683737 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.683779 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.683790 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.683803 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.683813 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:21Z","lastTransitionTime":"2026-01-22T09:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.696993 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5nnld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7391f43-09a9-4333-8df2-72d4fdc02615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99966\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99966\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:11:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5nnld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.718103 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.788042 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.788114 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.788137 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.788166 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.788188 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:21Z","lastTransitionTime":"2026-01-22T09:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.891921 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.891987 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.892004 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.892030 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.892052 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:21Z","lastTransitionTime":"2026-01-22T09:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.995139 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.995211 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.995236 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.995266 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:21 crc kubenswrapper[4892]: I0122 09:11:21.995323 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:21Z","lastTransitionTime":"2026-01-22T09:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.097609 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.097669 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.097686 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.097710 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.097727 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:22Z","lastTransitionTime":"2026-01-22T09:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.200082 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.200153 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.200176 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.200205 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.200226 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:22Z","lastTransitionTime":"2026-01-22T09:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.213706 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:11:22 crc kubenswrapper[4892]: E0122 09:11:22.214096 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:11:54.214057635 +0000 UTC m=+84.058136738 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.214187 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.214260 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.214376 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:11:22 crc kubenswrapper[4892]: E0122 09:11:22.214437 4892 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:11:22 crc kubenswrapper[4892]: E0122 09:11:22.214467 4892 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:11:22 crc kubenswrapper[4892]: E0122 09:11:22.214485 4892 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:11:22 crc kubenswrapper[4892]: E0122 09:11:22.214496 4892 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:11:22 crc kubenswrapper[4892]: E0122 09:11:22.214551 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 09:11:54.214527327 +0000 UTC m=+84.058606420 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:11:22 crc kubenswrapper[4892]: E0122 09:11:22.214581 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 09:11:54.214563988 +0000 UTC m=+84.058643051 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:11:22 crc kubenswrapper[4892]: E0122 09:11:22.214704 4892 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:11:22 crc kubenswrapper[4892]: E0122 09:11:22.214756 4892 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:11:22 crc kubenswrapper[4892]: E0122 09:11:22.214783 4892 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:11:22 crc kubenswrapper[4892]: E0122 09:11:22.214874 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 09:11:54.214851596 +0000 UTC m=+84.058930709 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.303013 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.303058 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.303073 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.303098 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.303113 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:22Z","lastTransitionTime":"2026-01-22T09:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.315240 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:11:22 crc kubenswrapper[4892]: E0122 09:11:22.315446 4892 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 09:11:22 crc kubenswrapper[4892]: E0122 09:11:22.315552 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 09:11:54.315525868 +0000 UTC m=+84.159605001 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.396894 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 21:57:50.176890832 +0000 UTC Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.405701 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.405752 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.405776 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.405794 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.405806 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:22Z","lastTransitionTime":"2026-01-22T09:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.418486 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.418498 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.418512 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:11:22 crc kubenswrapper[4892]: E0122 09:11:22.418929 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:11:22 crc kubenswrapper[4892]: E0122 09:11:22.419045 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:11:22 crc kubenswrapper[4892]: E0122 09:11:22.419180 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.419189 4892 scope.go:117] "RemoveContainer" containerID="510dc3684ca7a49935e312d797d6be46b9febb3dc0cb682405d4634477498ab0" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.508730 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.509056 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.509063 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.509074 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.509083 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:22Z","lastTransitionTime":"2026-01-22T09:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.611257 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.611334 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.611344 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.611359 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.611370 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:22Z","lastTransitionTime":"2026-01-22T09:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.713229 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.713260 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.713268 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.713304 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.713351 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:22Z","lastTransitionTime":"2026-01-22T09:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.750662 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-whb2h_a93623e9-3eab-47bb-b94a-5b962f3eb203/ovnkube-controller/1.log"
Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.753426 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" event={"ID":"a93623e9-3eab-47bb-b94a-5b962f3eb203","Type":"ContainerStarted","Data":"5a61dd8b6b26ae316bf2376a89151a98f6bda2fa88d9e28440b6b40f23efb27e"}
Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.754025 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h"
Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.775137 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:22Z is after 2025-08-24T17:21:41Z"
Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.789667 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:22Z is after 2025-08-24T17:21:41Z"
Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.806374 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20e79d60-66bf-44b6-8e7c-f8d995b5cc79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://754583e28044dfcccfb14919505f1bc67c0077cbc73e8219ea5d5ac9ff75d147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b4ba98c4f3bef5c4535b29e4eaa79ec0d12d63883368ade682af671f06d8e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:11:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ntkhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:22Z is after 2025-08-24T17:21:41Z"
Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.816595 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.816633 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.816644 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.816659 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.816671 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:22Z","lastTransitionTime":"2026-01-22T09:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.830225 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5nnld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7391f43-09a9-4333-8df2-72d4fdc02615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99966\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99966\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:11:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5nnld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:22Z is after 2025-08-24T17:21:41Z"
Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.840540 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:22Z is after 2025-08-24T17:21:41Z"
Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.876773 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:22Z is after 2025-08-24T17:21:41Z"
Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.891094 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:22Z is after 2025-08-24T17:21:41Z"
Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.904384 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:22Z is after 2025-08-24T17:21:41Z"
Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.913758 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:22Z is after 2025-08-24T17:21:41Z"
Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.918518 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.918564 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.918578 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.918597 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.918610 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:22Z","lastTransitionTime":"2026-01-22T09:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
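The same x509 failure repeats for every pod whose status the kubelet tries to patch in this window. A minimal sketch for tallying the affected pods, assuming the journal has been exported to a plain-text file (the file name below is hypothetical; the regex matches the status_manager.go:875 records exactly as they appear in this log):

import re
from collections import Counter

PAT = re.compile(r'"Failed to update status for pod" pod="([^"]+)"')

pods = Counter()
with open("kubelet-journal.txt", encoding="utf-8") as fh:  # hypothetical export
    for line in fh:
        for name in PAT.findall(line):
            pods[name] += 1

for name, count in pods.most_common():
    print(f"{count:3d}  {name}")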
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:22Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.940072 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:22Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.949264 4892 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-gqbrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2782a4-367a-4690-911a-06ca51331fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e069b9560e67a7f6e2188bfbfdd2180c36c0b0956f4a1d7c7f5803abcd3603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhzbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gqbrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:22Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.961126 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:22Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.978494 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a61dd8b6b26ae316bf2376a89151a98f6bda2fa88d9e28440b6b40f23efb27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510dc3684ca7a49935e312d797d6be46b9febb3dc0cb682405d4634477498ab0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:11:07Z\\\",\\\"message\\\":\\\"er-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0122 09:11:06.470526 6364 services_controller.go:452] Built service openshift-marketplace/marketplace-operator-metrics per-node LB for network=default: []services.LB{}\\\\nI0122 09:11:06.470532 6364 services_controller.go:452] Built service default/kubernetes per-node LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_router_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"169.254.0.2\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string(nil), Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, 
Grou\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"c
ontainerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:22Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:22 crc kubenswrapper[4892]: I0122 09:11:22.992531 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cf8a3ae4a4598966cb6d9be3d22dbad427f414e81a94e33d3283e4c6d662a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:22Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.009551 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:23Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.021956 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.022035 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.022057 4892 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.022090 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.022117 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:23Z","lastTransitionTime":"2026-01-22T09:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.125031 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.125076 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.125086 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.125121 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.125133 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:23Z","lastTransitionTime":"2026-01-22T09:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.228520 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.228593 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.228614 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.228641 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.228660 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:23Z","lastTransitionTime":"2026-01-22T09:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.330890 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.330951 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.330964 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.330984 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.330996 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:23Z","lastTransitionTime":"2026-01-22T09:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.397848 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 08:53:50.882245739 +0000 UTC Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.418315 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:11:23 crc kubenswrapper[4892]: E0122 09:11:23.418461 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.433587 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.433656 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.433676 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.433729 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.433747 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:23Z","lastTransitionTime":"2026-01-22T09:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.536238 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.536352 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.536374 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.536438 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.536452 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:23Z","lastTransitionTime":"2026-01-22T09:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.638728 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.638782 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.638794 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.638812 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.638825 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:23Z","lastTransitionTime":"2026-01-22T09:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.722541 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.722648 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.722674 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.722709 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.722735 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:23Z","lastTransitionTime":"2026-01-22T09:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:23 crc kubenswrapper[4892]: E0122 09:11:23.744868 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c930485-9734-4304-ad2c-ecfe6f90ae0f\\\",\\\"systemUUID\\\":\\\"61509b40-08df-4430-847e-d3a8d2681f9e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:23Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.750609 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.750658 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
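Interleaved with the patch failures, the kubelet keeps publishing Ready=False with reason KubeletNotReady because the CRI runtime reports no CNI configuration in /etc/kubernetes/cni/net.d/; the network plugin here is ovn-kubernetes, whose ovnkube-controller container (which mounts the host CNI config directory) is restarting, as the CrashLoopBackOff entry below shows. A small Go sketch mirroring the observable condition (illustrative; the kubelet's real readiness check is performed by the CRI runtime, and the file patterns are assumptions):

    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"
    )

    func main() {
        // The directory named in the NetworkPluginNotReady message.
        dir := "/etc/kubernetes/cni/net.d"
        var found []string
        for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
            m, err := filepath.Glob(filepath.Join(dir, pat))
            if err != nil {
                log.Fatal(err)
            }
            found = append(found, m...)
        }
        if len(found) == 0 {
            fmt.Println("no CNI configuration file in", dir,
                "- node stays NotReady until the network plugin writes one")
            os.Exit(1)
        }
        fmt.Println("CNI configs:", found)
    }
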
event="NodeHasNoDiskPressure" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.750669 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.750684 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.750697 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:23Z","lastTransitionTime":"2026-01-22T09:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.756997 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-whb2h_a93623e9-3eab-47bb-b94a-5b962f3eb203/ovnkube-controller/2.log" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.757541 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-whb2h_a93623e9-3eab-47bb-b94a-5b962f3eb203/ovnkube-controller/1.log" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.760012 4892 generic.go:334] "Generic (PLEG): container finished" podID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerID="5a61dd8b6b26ae316bf2376a89151a98f6bda2fa88d9e28440b6b40f23efb27e" exitCode=1 Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.760046 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" event={"ID":"a93623e9-3eab-47bb-b94a-5b962f3eb203","Type":"ContainerDied","Data":"5a61dd8b6b26ae316bf2376a89151a98f6bda2fa88d9e28440b6b40f23efb27e"} Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.760075 4892 scope.go:117] "RemoveContainer" containerID="510dc3684ca7a49935e312d797d6be46b9febb3dc0cb682405d4634477498ab0" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.762485 4892 scope.go:117] "RemoveContainer" containerID="5a61dd8b6b26ae316bf2376a89151a98f6bda2fa88d9e28440b6b40f23efb27e" Jan 22 09:11:23 crc kubenswrapper[4892]: E0122 09:11:23.762793 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-whb2h_openshift-ovn-kubernetes(a93623e9-3eab-47bb-b94a-5b962f3eb203)\"" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" Jan 22 09:11:23 crc kubenswrapper[4892]: E0122 09:11:23.765062 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c930485-9734-4304-ad2c-ecfe6f90ae0f\\\",\\\"systemUUID\\\":\\\"61509b40-08df-4430-847e-d3a8d2681f9e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:23Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.775226 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.775330 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.775358 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.775399 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.775422 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:23Z","lastTransitionTime":"2026-01-22T09:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.776044 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gqbrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2782a4-367a-4690-911a-06ca51331fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e069b9560e67a7f6e2188bfbfdd2180c36c0b0956f4a1d7c7f5803abcd3603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhzbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gqbrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:23Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.795646 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:23Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:23 crc kubenswrapper[4892]: E0122 09:11:23.798382 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c930485-9734-4304-ad2c-ecfe6f90ae0f\\\",\\\"systemUUID\\\":\\\"61509b40-08df-4430-847e-d3a8d2681f9e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:23Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.806616 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.806702 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.806719 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.806748 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.806767 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:23Z","lastTransitionTime":"2026-01-22T09:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.816252 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:23Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:23 crc kubenswrapper[4892]: E0122 09:11:23.820323 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c930485-9734-4304-ad2c-ecfe6f90ae0f\\\",\\\"systemUUID\\\":\\\"61509b40-08df-4430-847e-d3a8d2681f9e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:23Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.825163 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.825235 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.825249 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.825276 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.825307 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:23Z","lastTransitionTime":"2026-01-22T09:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.833869 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"o
vnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:23Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:23 crc kubenswrapper[4892]: E0122 09:11:23.840741 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c930485-9734-4304-ad2c-ecfe6f90ae0f\\\",\\\"systemUUID\\\":\\\"61509b40-08df-4430-847e-d3a8d2681f9e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:23Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:23 crc kubenswrapper[4892]: E0122 09:11:23.840901 4892 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.843128 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.843169 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.843187 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.843233 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.843294 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:23Z","lastTransitionTime":"2026-01-22T09:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.867049 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a61dd8b6b26ae316bf2376a89151a98f6bda2fa
88d9e28440b6b40f23efb27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510dc3684ca7a49935e312d797d6be46b9febb3dc0cb682405d4634477498ab0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:11:07Z\\\",\\\"message\\\":\\\"er-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0122 09:11:06.470526 6364 services_controller.go:452] Built service openshift-marketplace/marketplace-operator-metrics per-node LB for network=default: []services.LB{}\\\\nI0122 09:11:06.470532 6364 services_controller.go:452] Built service default/kubernetes per-node LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_router_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"169.254.0.2\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string(nil), Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Grou\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a61dd8b6b26ae316bf2376a89151a98f6bda2fa88d9e28440b6b40f23efb27e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"message\\\":\\\"andler 8\\\\nI0122 09:11:23.361739 6586 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 09:11:23.361753 6586 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 09:11:23.361743 6586 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 09:11:23.361756 6586 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 09:11:23.361788 6586 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.361851 6586 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.361938 6586 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362106 6586 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362180 6586 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362302 6586 
reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362865 6586 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 09:11:23.362909 6586 factory.go:656] Stopping watch factory\\\\nI0122 09:11:23.362924 6586 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:23Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.887247 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cf8a3ae4a4598966cb6d9be3d22dbad427f414e81a94e33d3283e4c6d662a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:23Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.903327 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:23Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.918487 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:23Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.934726 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:23Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.946821 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.946878 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.946899 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.946928 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.946948 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:23Z","lastTransitionTime":"2026-01-22T09:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.948612 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20e79d60-66bf-44b6-8e7c-f8d995b5cc79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://754583e28044dfcccfb14919505f1bc67c0077cbc73e8219ea5d5ac9ff75d147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b4ba98c4f3bef5c4535b29e4eaa79ec0d12d63883368ade682af671f06d8e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:11:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ntkhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:23Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.962686 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5nnld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7391f43-09a9-4333-8df2-72d4fdc02615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99966\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99966\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:11:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5nnld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:23Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:23 crc kubenswrapper[4892]: I0122 09:11:23.983465 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:23Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.003692 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.020432 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.039644 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.050651 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.050700 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.050717 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.050743 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.050763 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:24Z","lastTransitionTime":"2026-01-22T09:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.053656 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.155376 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.155462 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.155488 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.155521 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.155547 4892 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:24Z","lastTransitionTime":"2026-01-22T09:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.258081 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.258181 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.258201 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.258225 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.258244 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:24Z","lastTransitionTime":"2026-01-22T09:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.360400 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.360481 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.360495 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.360522 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.360542 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:24Z","lastTransitionTime":"2026-01-22T09:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.398824 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 21:30:55.470736475 +0000 UTC Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.418236 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.418423 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:11:24 crc kubenswrapper[4892]: E0122 09:11:24.418545 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.418722 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:11:24 crc kubenswrapper[4892]: E0122 09:11:24.418734 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:11:24 crc kubenswrapper[4892]: E0122 09:11:24.419058 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.463612 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.463660 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.463676 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.463694 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.463706 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:24Z","lastTransitionTime":"2026-01-22T09:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.565991 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.566022 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.566030 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.566044 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.566054 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:24Z","lastTransitionTime":"2026-01-22T09:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.668617 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.668670 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.668687 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.668708 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.668723 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:24Z","lastTransitionTime":"2026-01-22T09:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.764510 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-whb2h_a93623e9-3eab-47bb-b94a-5b962f3eb203/ovnkube-controller/2.log" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.767884 4892 scope.go:117] "RemoveContainer" containerID="5a61dd8b6b26ae316bf2376a89151a98f6bda2fa88d9e28440b6b40f23efb27e" Jan 22 09:11:24 crc kubenswrapper[4892]: E0122 09:11:24.768043 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-whb2h_openshift-ovn-kubernetes(a93623e9-3eab-47bb-b94a-5b962f3eb203)\"" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.771049 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.771076 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.771086 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.771101 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.771113 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:24Z","lastTransitionTime":"2026-01-22T09:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.779827 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.791945 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20e79d60-66bf-44b6-8e7c-f8d995b5cc79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://754583e28044dfcccfb14919505f1bc67c0077cbc73e8219ea5d5ac9ff75d147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b4ba98c4f3bef5c4535b29e4eaa79ec0d12d63883368ade682af671f06d8e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:11:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ntkhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:24Z is after 2025-08-24T17:21:41Z" Jan 22 
09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.801828 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5nnld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7391f43-09a9-4333-8df2-72d4fdc02615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99966\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99966\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:11:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5nnld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.813605 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.824868 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.836875 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.851519 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.861944 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.873699 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.873741 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.873752 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.873771 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.873783 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:24Z","lastTransitionTime":"2026-01-22T09:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.876983 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.889945 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.900902 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-da
emon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.910746 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gqbrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2782a4-367a-4690-911a-06ca51331fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e069b9560e67a7f6e2188bfbfdd2180c36c0b0956f4a1d7c7f5803abcd3603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhzbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gqbrf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.925269 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.943630 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cf8a3ae4a4598966cb6d9be3d22dbad427f414e81a94e33d3283e4c6d662a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.957861 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.977459 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.977513 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.977532 4892 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.977556 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.977574 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:24Z","lastTransitionTime":"2026-01-22T09:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:24 crc kubenswrapper[4892]: I0122 09:11:24.979333 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a61dd8b6b26ae316bf2376a89151a98f6bda2fa
88d9e28440b6b40f23efb27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a61dd8b6b26ae316bf2376a89151a98f6bda2fa88d9e28440b6b40f23efb27e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"message\\\":\\\"andler 8\\\\nI0122 09:11:23.361739 6586 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 09:11:23.361753 6586 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 09:11:23.361743 6586 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 09:11:23.361756 6586 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 09:11:23.361788 6586 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.361851 6586 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.361938 6586 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362106 6586 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362180 6586 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362302 6586 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362865 6586 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 09:11:23.362909 6586 factory.go:656] Stopping watch factory\\\\nI0122 09:11:23.362924 6586 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-whb2h_openshift-ovn-kubernetes(a93623e9-3eab-47bb-b94a-5b962f3eb203)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.080219 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.080266 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.080277 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.080310 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.080322 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:25Z","lastTransitionTime":"2026-01-22T09:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.183171 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.183496 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.183593 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.183719 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.183821 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:25Z","lastTransitionTime":"2026-01-22T09:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.287430 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.287678 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.287761 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.287855 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.287979 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:25Z","lastTransitionTime":"2026-01-22T09:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.396516 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.396594 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.396618 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.396649 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.396673 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:25Z","lastTransitionTime":"2026-01-22T09:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.399776 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 13:29:36.949001083 +0000 UTC Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.418271 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:11:25 crc kubenswrapper[4892]: E0122 09:11:25.418546 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.499324 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.499366 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.499378 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.499394 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.499407 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:25Z","lastTransitionTime":"2026-01-22T09:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.547684 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7391f43-09a9-4333-8df2-72d4fdc02615-metrics-certs\") pod \"network-metrics-daemon-5nnld\" (UID: \"f7391f43-09a9-4333-8df2-72d4fdc02615\") " pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:11:25 crc kubenswrapper[4892]: E0122 09:11:25.547894 4892 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:11:25 crc kubenswrapper[4892]: E0122 09:11:25.548010 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7391f43-09a9-4333-8df2-72d4fdc02615-metrics-certs podName:f7391f43-09a9-4333-8df2-72d4fdc02615 nodeName:}" failed. No retries permitted until 2026-01-22 09:11:41.547984547 +0000 UTC m=+71.392063670 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f7391f43-09a9-4333-8df2-72d4fdc02615-metrics-certs") pod "network-metrics-daemon-5nnld" (UID: "f7391f43-09a9-4333-8df2-72d4fdc02615") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.602194 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.602223 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.602232 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.602246 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.602254 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:25Z","lastTransitionTime":"2026-01-22T09:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.705140 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.705208 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.705231 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.705260 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.705317 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:25Z","lastTransitionTime":"2026-01-22T09:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.808546 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.808619 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.808643 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.808671 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.808690 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:25Z","lastTransitionTime":"2026-01-22T09:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.911316 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.911380 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.911398 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.911423 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:25 crc kubenswrapper[4892]: I0122 09:11:25.911440 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:25Z","lastTransitionTime":"2026-01-22T09:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.014216 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.014262 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.014274 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.014309 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.014322 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:26Z","lastTransitionTime":"2026-01-22T09:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.117142 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.117207 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.117230 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.117261 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.117326 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:26Z","lastTransitionTime":"2026-01-22T09:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.173521 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.185342 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.189523 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.200653 4892 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-gqbrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2782a4-367a-4690-911a-06ca51331fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e069b9560e67a7f6e2188bfbfdd2180c36c0b0956f4a1d7c7f5803abcd3603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhzbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gqbrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.217126 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.220145 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.220177 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.220189 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.220205 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.220217 4892 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:26Z","lastTransitionTime":"2026-01-22T09:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.229572 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.255787 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174
f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a61dd8b6b26ae316bf2376a89151a98f6bda2fa88d9e28440b6b40f23efb27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a61dd8b6b26ae316bf2376a89151a98f6bda2fa88d9e28440b6b40f23efb27e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"message\\\":\\\"andler 8\\\\nI0122 09:11:23.361739 6586 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 09:11:23.361753 6586 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 09:11:23.361743 6586 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 09:11:23.361756 6586 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 09:11:23.361788 6586 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.361851 6586 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.361938 6586 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362106 6586 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362180 6586 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362302 6586 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362865 6586 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 09:11:23.362909 6586 factory.go:656] Stopping watch factory\\\\nI0122 09:11:23.362924 6586 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s 
restarting failed container=ovnkube-controller pod=ovnkube-node-whb2h_openshift-ovn-kubernetes(a93623e9-3eab-47bb-b94a-5b962f3eb203)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.273847 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cf8a3ae4a4598966cb6d9be3d22dbad427f414e81a94e33d3283e4c6d662a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"
hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4
a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.287128 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20e79d60-66bf-44b6-8e7c-f8d995b5cc79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://754583e28044dfcccfb14919505f1bc67c0077cbc73e8219ea5d5ac9ff75d147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b4ba98c4f3bef5c4535
b29e4eaa79ec0d12d63883368ade682af671f06d8e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:11:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ntkhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.306266 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5nnld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7391f43-09a9-4333-8df2-72d4fdc02615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99966\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99966\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:11:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5nnld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.322331 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.322374 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.322383 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.322399 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.322409 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:26Z","lastTransitionTime":"2026-01-22T09:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.327833 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.348614 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.368387 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.383163 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.395348 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.400519 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 13:12:07.333270848 +0000 UTC Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.408899 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.417801 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.417869 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:11:26 crc kubenswrapper[4892]: E0122 09:11:26.417958 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:11:26 crc kubenswrapper[4892]: E0122 09:11:26.418032 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.418118 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:11:26 crc kubenswrapper[4892]: E0122 09:11:26.418168 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.423927 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.425503 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.425542 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.425551 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.425566 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.425576 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:26Z","lastTransitionTime":"2026-01-22T09:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.441278 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.529750 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.529797 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.529807 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.529826 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.529840 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:26Z","lastTransitionTime":"2026-01-22T09:11:26Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.632962 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.633274 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.633388 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.633501 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.633624 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:26Z","lastTransitionTime":"2026-01-22T09:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.736806 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.737012 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.737092 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.737211 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.737271 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:26Z","lastTransitionTime":"2026-01-22T09:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.841136 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.841686 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.841752 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.841864 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.842162 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:26Z","lastTransitionTime":"2026-01-22T09:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.945962 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.946030 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.946052 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.946083 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:26 crc kubenswrapper[4892]: I0122 09:11:26.946103 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:26Z","lastTransitionTime":"2026-01-22T09:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.050409 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.050516 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.050541 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.050573 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.050599 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:27Z","lastTransitionTime":"2026-01-22T09:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.154590 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.154676 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.154698 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.154730 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.154751 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:27Z","lastTransitionTime":"2026-01-22T09:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.258346 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.258399 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.258415 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.258434 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.258447 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:27Z","lastTransitionTime":"2026-01-22T09:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.361014 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.361044 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.361070 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.361084 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.361093 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:27Z","lastTransitionTime":"2026-01-22T09:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.401554 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 04:16:26.238467722 +0000 UTC Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.417960 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:11:27 crc kubenswrapper[4892]: E0122 09:11:27.418095 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.467994 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.468094 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.468121 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.468155 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.468183 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:27Z","lastTransitionTime":"2026-01-22T09:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.571223 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.571274 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.571308 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.571331 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.571347 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:27Z","lastTransitionTime":"2026-01-22T09:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.674508 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.674559 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.674575 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.674597 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.674612 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:27Z","lastTransitionTime":"2026-01-22T09:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.776997 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.777038 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.777051 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.777067 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.777078 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:27Z","lastTransitionTime":"2026-01-22T09:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.880347 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.880392 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.880402 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.880417 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.880426 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:27Z","lastTransitionTime":"2026-01-22T09:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.983229 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.983262 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.983271 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.983296 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:27 crc kubenswrapper[4892]: I0122 09:11:27.983304 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:27Z","lastTransitionTime":"2026-01-22T09:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.086700 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.086743 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.086755 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.086770 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.086781 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:28Z","lastTransitionTime":"2026-01-22T09:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.189959 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.190010 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.190023 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.190045 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.190059 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:28Z","lastTransitionTime":"2026-01-22T09:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.292685 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.292777 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.292800 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.292824 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.292845 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:28Z","lastTransitionTime":"2026-01-22T09:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.395659 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.395731 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.395739 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.395756 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.395765 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:28Z","lastTransitionTime":"2026-01-22T09:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.402033 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 21:47:09.871298301 +0000 UTC Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.418346 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.418407 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:11:28 crc kubenswrapper[4892]: E0122 09:11:28.418464 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.418558 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:11:28 crc kubenswrapper[4892]: E0122 09:11:28.418757 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:11:28 crc kubenswrapper[4892]: E0122 09:11:28.418917 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.498742 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.498826 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.498851 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.498892 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.498923 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:28Z","lastTransitionTime":"2026-01-22T09:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.601771 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.601856 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.601879 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.601908 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.601928 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:28Z","lastTransitionTime":"2026-01-22T09:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.705162 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.705213 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.705226 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.705244 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.705262 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:28Z","lastTransitionTime":"2026-01-22T09:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.808129 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.808196 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.808214 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.808245 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.808264 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:28Z","lastTransitionTime":"2026-01-22T09:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.910828 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.910898 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.910919 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.910945 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:28 crc kubenswrapper[4892]: I0122 09:11:28.910967 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:28Z","lastTransitionTime":"2026-01-22T09:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.014572 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.014633 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.014651 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.014675 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.014693 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:29Z","lastTransitionTime":"2026-01-22T09:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.116881 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.116926 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.116937 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.116955 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.116966 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:29Z","lastTransitionTime":"2026-01-22T09:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.220081 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.220161 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.220191 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.220223 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.220248 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:29Z","lastTransitionTime":"2026-01-22T09:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.322849 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.322923 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.322948 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.322983 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.323011 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:29Z","lastTransitionTime":"2026-01-22T09:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.425505 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 08:35:37.223942652 +0000 UTC Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.426045 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:11:29 crc kubenswrapper[4892]: E0122 09:11:29.426220 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.427948 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.428011 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.428036 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.428066 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.428096 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:29Z","lastTransitionTime":"2026-01-22T09:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.530386 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.530426 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.530438 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.530505 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.530520 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:29Z","lastTransitionTime":"2026-01-22T09:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.632552 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.633331 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.633365 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.633390 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.633403 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:29Z","lastTransitionTime":"2026-01-22T09:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.735860 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.735912 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.735924 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.735942 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.735955 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:29Z","lastTransitionTime":"2026-01-22T09:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.842837 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.842893 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.842906 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.842923 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.842936 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:29Z","lastTransitionTime":"2026-01-22T09:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.945050 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.945736 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.945765 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.945784 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:29 crc kubenswrapper[4892]: I0122 09:11:29.945799 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:29Z","lastTransitionTime":"2026-01-22T09:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.048589 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.048678 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.048712 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.048743 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.048767 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:30Z","lastTransitionTime":"2026-01-22T09:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.150686 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.150718 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.150728 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.150743 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.150755 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:30Z","lastTransitionTime":"2026-01-22T09:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.253152 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.253213 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.253233 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.253258 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.253275 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:30Z","lastTransitionTime":"2026-01-22T09:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.355408 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.355440 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.355448 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.355461 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.355470 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:30Z","lastTransitionTime":"2026-01-22T09:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.418182 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.418211 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:11:30 crc kubenswrapper[4892]: E0122 09:11:30.418346 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.418371 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:11:30 crc kubenswrapper[4892]: E0122 09:11:30.418478 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:11:30 crc kubenswrapper[4892]: E0122 09:11:30.418557 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.426575 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 00:05:39.417003013 +0000 UTC Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.457711 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.457761 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.457778 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.457798 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.457810 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:30Z","lastTransitionTime":"2026-01-22T09:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.560694 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.560726 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.560737 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.560750 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.560760 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:30Z","lastTransitionTime":"2026-01-22T09:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.663470 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.663508 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.663518 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.663532 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.663542 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:30Z","lastTransitionTime":"2026-01-22T09:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.766064 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.766108 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.766129 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.766153 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.766171 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:30Z","lastTransitionTime":"2026-01-22T09:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.869101 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.869139 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.869150 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.869165 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.869177 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:30Z","lastTransitionTime":"2026-01-22T09:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.972983 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.973050 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.973069 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.973092 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:30 crc kubenswrapper[4892]: I0122 09:11:30.973110 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:30Z","lastTransitionTime":"2026-01-22T09:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.076342 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.076640 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.076781 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.077066 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.077345 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:31Z","lastTransitionTime":"2026-01-22T09:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.179652 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.179927 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.180020 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.180112 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.180193 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:31Z","lastTransitionTime":"2026-01-22T09:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.283166 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.283216 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.283230 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.283252 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.283264 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:31Z","lastTransitionTime":"2026-01-22T09:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.385660 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.386030 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.386189 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.386370 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.386547 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:31Z","lastTransitionTime":"2026-01-22T09:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.417637 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:11:31 crc kubenswrapper[4892]: E0122 09:11:31.417811 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.427176 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 20:05:14.122541068 +0000 UTC Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.436809 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"142d92a9-f2e0-45be-82f5-322439d4489c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5b378f795b29dfc84b76bc5d00e62a720152c11de7706951cf16f6f28d22695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f8d7970508541aff408a864ee28b62dfcc364c71eb9cf6a7d8bed65a048f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c26cf9e131d882402ee4d883129a1aa107469310cd81b7b4132ff07af1c56d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c122b5a9b05843b32891382b08a26e863fd5611e29ca3bfcfbfe10be5311d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://025c122b5a9b05843b32891382b08a26e863fd5611e29ca3bfcfbfe10be5311d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.450946 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.463438 4892 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-gqbrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2782a4-367a-4690-911a-06ca51331fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e069b9560e67a7f6e2188bfbfdd2180c36c0b0956f4a1d7c7f5803abcd3603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhzbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gqbrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.479137 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.490052 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.490101 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.490114 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.490134 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.490149 4892 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:31Z","lastTransitionTime":"2026-01-22T09:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.504390 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a61dd8b6b26ae316bf2376a89151a98f6bda2fa88d9e28440b6b40f23efb27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a61dd8b6b26ae316bf2376a89151a98f6bda2fa88d9e28440b6b40f23efb27e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"message\\\":\\\"andler 8\\\\nI0122 09:11:23.361739 6586 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 09:11:23.361753 6586 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 09:11:23.361743 6586 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 09:11:23.361756 6586 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 09:11:23.361788 6586 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.361851 6586 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.361938 6586 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362106 6586 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362180 6586 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362302 6586 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362865 6586 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 09:11:23.362909 6586 factory.go:656] Stopping watch factory\\\\nI0122 09:11:23.362924 6586 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-whb2h_openshift-ovn-kubernetes(a93623e9-3eab-47bb-b94a-5b962f3eb203)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.523345 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cf8a3ae4a4598966cb6d9be3d22dbad427f414e81a94e33d3283e4c6d662a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.538609 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.551255 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.562111 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.571996 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20e79d60-66bf-44b6-8e7c-f8d995b5cc79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://754583e28044dfcccfb14919505f1bc67c0077cbc73e8219ea5d5ac9ff75d147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b4ba98c4f3bef5c4535b29e4eaa79ec0d12d63883368ade682af671f06d8e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:11:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ntkhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:31Z is after 2025-08-24T17:21:41Z" Jan 22 
09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.580565 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5nnld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7391f43-09a9-4333-8df2-72d4fdc02615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99966\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99966\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:11:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5nnld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.591186 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.592677 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.592764 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.592781 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.592832 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.592855 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:31Z","lastTransitionTime":"2026-01-22T09:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.603046 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.615518 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.629829 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.642810 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.657876 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.695568 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.695625 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.695643 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.695668 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.695687 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:31Z","lastTransitionTime":"2026-01-22T09:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.799366 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.799416 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.799426 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.799443 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.799456 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:31Z","lastTransitionTime":"2026-01-22T09:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.902016 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.902093 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.902105 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.902171 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:31 crc kubenswrapper[4892]: I0122 09:11:31.902189 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:31Z","lastTransitionTime":"2026-01-22T09:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.004643 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.004709 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.004722 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.004739 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.004751 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:32Z","lastTransitionTime":"2026-01-22T09:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.107527 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.107617 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.107636 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.107661 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.107679 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:32Z","lastTransitionTime":"2026-01-22T09:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.210798 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.210865 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.210887 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.210928 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.210950 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:32Z","lastTransitionTime":"2026-01-22T09:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.313331 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.313361 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.313371 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.313385 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.313398 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:32Z","lastTransitionTime":"2026-01-22T09:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.416136 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.416208 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.416219 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.416232 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.416243 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:32Z","lastTransitionTime":"2026-01-22T09:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.418487 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:11:32 crc kubenswrapper[4892]: E0122 09:11:32.418604 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.418765 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:11:32 crc kubenswrapper[4892]: E0122 09:11:32.418834 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.418978 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:11:32 crc kubenswrapper[4892]: E0122 09:11:32.419047 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.427866 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 17:25:16.334838644 +0000 UTC Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.519569 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.519673 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.519699 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.519733 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.519759 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:32Z","lastTransitionTime":"2026-01-22T09:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.622524 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.622637 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.622655 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.622682 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.622700 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:32Z","lastTransitionTime":"2026-01-22T09:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.725751 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.725816 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.725831 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.725853 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.725871 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:32Z","lastTransitionTime":"2026-01-22T09:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.828556 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.828633 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.828661 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.828688 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.828710 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:32Z","lastTransitionTime":"2026-01-22T09:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.931638 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.931712 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.931733 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.931761 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:32 crc kubenswrapper[4892]: I0122 09:11:32.931785 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:32Z","lastTransitionTime":"2026-01-22T09:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.034644 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.034713 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.034732 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.034788 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.034817 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:33Z","lastTransitionTime":"2026-01-22T09:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.138951 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.139042 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.139068 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.139107 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.139136 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:33Z","lastTransitionTime":"2026-01-22T09:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.241762 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.241805 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.241821 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.241842 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.241857 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:33Z","lastTransitionTime":"2026-01-22T09:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.345057 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.345118 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.345133 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.345155 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.345174 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:33Z","lastTransitionTime":"2026-01-22T09:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.418444 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:11:33 crc kubenswrapper[4892]: E0122 09:11:33.418603 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.428852 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 03:08:04.328962937 +0000 UTC Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.449266 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.449379 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.449399 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.449430 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.449452 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:33Z","lastTransitionTime":"2026-01-22T09:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.553453 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.553550 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.553572 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.553605 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.553625 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:33Z","lastTransitionTime":"2026-01-22T09:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.658427 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.658521 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.658564 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.658604 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.658631 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:33Z","lastTransitionTime":"2026-01-22T09:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.760930 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.760983 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.760998 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.761021 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.761038 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:33Z","lastTransitionTime":"2026-01-22T09:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.863202 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.863238 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.863246 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.863259 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:33 crc kubenswrapper[4892]: I0122 09:11:33.863267 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:33Z","lastTransitionTime":"2026-01-22T09:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.365114 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.365183 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.365200 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.365224 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.365247 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:34Z","lastTransitionTime":"2026-01-22T09:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.368685 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.368725 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.368743 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.368779 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.368803 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:34Z","lastTransitionTime":"2026-01-22T09:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:34 crc kubenswrapper[4892]: E0122 09:11:34.386981 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c930485-9734-4304-ad2c-ecfe6f90ae0f\\\",\\\"systemUUID\\\":\\\"61509b40-08df-4430-847e-d3a8d2681f9e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:34Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.392473 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.392514 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.392525 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.392541 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.392551 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:34Z","lastTransitionTime":"2026-01-22T09:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:34 crc kubenswrapper[4892]: E0122 09:11:34.405358 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c930485-9734-4304-ad2c-ecfe6f90ae0f\\\",\\\"systemUUID\\\":\\\"61509b40-08df-4430-847e-d3a8d2681f9e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:34Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.410244 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.410352 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.410379 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.410405 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.410428 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:34Z","lastTransitionTime":"2026-01-22T09:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.417578 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.417609 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.417654 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:11:34 crc kubenswrapper[4892]: E0122 09:11:34.417686 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:11:34 crc kubenswrapper[4892]: E0122 09:11:34.417799 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:11:34 crc kubenswrapper[4892]: E0122 09:11:34.417837 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:11:34 crc kubenswrapper[4892]: E0122 09:11:34.428333 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c930485-9734-4304-ad2c-ecfe6f90ae0f\\\",\\\"systemUUID\\\":\\\"61509b40-08df-4430-847e-d3a8d2681f9e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:34Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.429072 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 15:39:31.718309754 +0000 UTC Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.451127 4892 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.451166 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.451178 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.451195 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.451209 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:34Z","lastTransitionTime":"2026-01-22T09:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:34 crc kubenswrapper[4892]: E0122 09:11:34.469232 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c930485-9734-4304-ad2c-ecfe6f90ae0f\\\",\\\"systemUUID\\\":\\\"61509b40-08df-4430-847e-d3a8d2681f9e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:34Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.473457 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.473491 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.473502 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.473518 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.473531 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:34Z","lastTransitionTime":"2026-01-22T09:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:34 crc kubenswrapper[4892]: E0122 09:11:34.489780 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c930485-9734-4304-ad2c-ecfe6f90ae0f\\\",\\\"systemUUID\\\":\\\"61509b40-08df-4430-847e-d3a8d2681f9e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:34Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:34 crc kubenswrapper[4892]: E0122 09:11:34.489942 4892 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.491924 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.491968 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.491992 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.492011 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.492020 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:34Z","lastTransitionTime":"2026-01-22T09:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.595076 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.595127 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.595139 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.595158 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.595174 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:34Z","lastTransitionTime":"2026-01-22T09:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.699716 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.699814 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.699840 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.699876 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.699904 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:34Z","lastTransitionTime":"2026-01-22T09:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.803096 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.803166 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.803184 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.803212 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.803231 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:34Z","lastTransitionTime":"2026-01-22T09:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.906505 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.906554 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.906565 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.906582 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:34 crc kubenswrapper[4892]: I0122 09:11:34.906593 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:34Z","lastTransitionTime":"2026-01-22T09:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.009769 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.009865 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.009894 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.009932 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.009969 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:35Z","lastTransitionTime":"2026-01-22T09:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.112435 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.112489 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.112503 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.112522 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.112535 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:35Z","lastTransitionTime":"2026-01-22T09:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.215660 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.215714 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.215726 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.215747 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.215758 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:35Z","lastTransitionTime":"2026-01-22T09:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.318246 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.318308 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.318321 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.318336 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.318348 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:35Z","lastTransitionTime":"2026-01-22T09:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.418371 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:11:35 crc kubenswrapper[4892]: E0122 09:11:35.418527 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.420043 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.420072 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.420085 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.420103 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.420115 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:35Z","lastTransitionTime":"2026-01-22T09:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.429481 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 20:43:34.226990956 +0000 UTC Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.522915 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.522981 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.522997 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.523014 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.523030 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:35Z","lastTransitionTime":"2026-01-22T09:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.625992 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.626055 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.626073 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.626101 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.626122 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:35Z","lastTransitionTime":"2026-01-22T09:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.727999 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.728046 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.728058 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.728076 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.728090 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:35Z","lastTransitionTime":"2026-01-22T09:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.831155 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.831195 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.831203 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.831219 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.831232 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:35Z","lastTransitionTime":"2026-01-22T09:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.933240 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.933313 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.933322 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.933337 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:35 crc kubenswrapper[4892]: I0122 09:11:35.933346 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:35Z","lastTransitionTime":"2026-01-22T09:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.035994 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.036028 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.036039 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.036056 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.036068 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:36Z","lastTransitionTime":"2026-01-22T09:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.138133 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.138175 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.138184 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.138197 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.138207 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:36Z","lastTransitionTime":"2026-01-22T09:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.241556 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.241596 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.241607 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.241631 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.241640 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:36Z","lastTransitionTime":"2026-01-22T09:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.344276 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.344371 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.344396 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.344427 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.344450 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:36Z","lastTransitionTime":"2026-01-22T09:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.417729 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.417888 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:11:36 crc kubenswrapper[4892]: E0122 09:11:36.417998 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:11:36 crc kubenswrapper[4892]: E0122 09:11:36.418699 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.418743 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:11:36 crc kubenswrapper[4892]: E0122 09:11:36.418971 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.419105 4892 scope.go:117] "RemoveContainer" containerID="5a61dd8b6b26ae316bf2376a89151a98f6bda2fa88d9e28440b6b40f23efb27e" Jan 22 09:11:36 crc kubenswrapper[4892]: E0122 09:11:36.419535 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-whb2h_openshift-ovn-kubernetes(a93623e9-3eab-47bb-b94a-5b962f3eb203)\"" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.430324 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 14:12:52.545194049 +0000 UTC Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.447300 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.447338 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.447351 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.447411 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.447426 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:36Z","lastTransitionTime":"2026-01-22T09:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.550612 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.550670 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.550681 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.550701 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.550711 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:36Z","lastTransitionTime":"2026-01-22T09:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.653793 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.653850 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.653868 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.653891 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.653908 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:36Z","lastTransitionTime":"2026-01-22T09:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.756745 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.756812 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.756834 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.756860 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.756877 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:36Z","lastTransitionTime":"2026-01-22T09:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.859648 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.859733 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.859752 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.859810 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.859831 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:36Z","lastTransitionTime":"2026-01-22T09:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.963861 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.963928 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.963944 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.963965 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:36 crc kubenswrapper[4892]: I0122 09:11:36.963979 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:36Z","lastTransitionTime":"2026-01-22T09:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.067562 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.067610 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.067619 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.067634 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.067644 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:37Z","lastTransitionTime":"2026-01-22T09:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.170399 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.170463 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.170481 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.170507 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.170524 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:37Z","lastTransitionTime":"2026-01-22T09:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.272294 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.272335 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.272343 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.272358 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.272366 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:37Z","lastTransitionTime":"2026-01-22T09:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.374790 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.374827 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.374838 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.374855 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.374866 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:37Z","lastTransitionTime":"2026-01-22T09:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.418359 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld"
Jan 22 09:11:37 crc kubenswrapper[4892]: E0122 09:11:37.418461 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.430697 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 05:37:54.191647474 +0000 UTC
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.477490 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.477524 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.477535 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.477551 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.477563 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:37Z","lastTransitionTime":"2026-01-22T09:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.579423 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.579473 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.579484 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.579499 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.579509 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:37Z","lastTransitionTime":"2026-01-22T09:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
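[Editor's note] Every NetworkPluginNotReady message above traces back to one condition: kubelet found no CNI network configuration under /etc/kubernetes/cni/net.d/ (on this cluster, the directory the network operator is expected to populate). The following is a minimal standalone sketch of that check, not kubelet's actual implementation; the accepted extensions (.conf, .conflist, .json) follow the usual CNI libcni convention, and the function name is invented for illustration.

// cnicheck.go: reproduce the "no CNI configuration file" probe.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains any plausible CNI network config.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err // a missing directory also means "no network config"
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		// Mirrors the condition kubelet keeps reporting in this log.
		fmt.Println("network not ready: no CNI configuration file found:", err)
		return
	}
	fmt.Println("CNI configuration present")
}

Until a file appears in that directory, the runtime keeps reporting NetworkReady=false and the loop above repeats on every node-status sync.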
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.682650 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.682708 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.682716 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.682730 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.682740 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:37Z","lastTransitionTime":"2026-01-22T09:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.785712 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.785773 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.785785 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.785801 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.785813 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:37Z","lastTransitionTime":"2026-01-22T09:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.887659 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.887704 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.887714 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.887729 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.887740 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:37Z","lastTransitionTime":"2026-01-22T09:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.990251 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.990313 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.990325 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.990340 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:37 crc kubenswrapper[4892]: I0122 09:11:37.990353 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:37Z","lastTransitionTime":"2026-01-22T09:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.093062 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.093120 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.093129 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.093149 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.093164 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:38Z","lastTransitionTime":"2026-01-22T09:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.195918 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.195975 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.195991 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.196012 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.196029 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:38Z","lastTransitionTime":"2026-01-22T09:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.299024 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.299096 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.299113 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.299142 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.299160 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:38Z","lastTransitionTime":"2026-01-22T09:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.402252 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.402328 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.402341 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.402362 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.402376 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:38Z","lastTransitionTime":"2026-01-22T09:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.417783 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.417816 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.417792 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 09:11:38 crc kubenswrapper[4892]: E0122 09:11:38.417908 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 09:11:38 crc kubenswrapper[4892]: E0122 09:11:38.418035 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 09:11:38 crc kubenswrapper[4892]: E0122 09:11:38.418237 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.430889 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 23:57:52.01770747 +0000 UTC
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.504807 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.504875 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.504890 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.504910 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.504923 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:38Z","lastTransitionTime":"2026-01-22T09:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.607467 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.607506 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.607515 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.607528 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.607538 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:38Z","lastTransitionTime":"2026-01-22T09:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.709633 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.709674 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.709686 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.709706 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.709721 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:38Z","lastTransitionTime":"2026-01-22T09:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.815013 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.815063 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.815076 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.815095 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.815105 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:38Z","lastTransitionTime":"2026-01-22T09:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.917592 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.917629 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.917638 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.917652 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:38 crc kubenswrapper[4892]: I0122 09:11:38.917664 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:38Z","lastTransitionTime":"2026-01-22T09:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.019904 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.019933 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.019944 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.019958 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.019969 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:39Z","lastTransitionTime":"2026-01-22T09:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.122275 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.122365 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.122379 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.122401 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.122416 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:39Z","lastTransitionTime":"2026-01-22T09:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.224406 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.224454 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.224465 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.224480 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.224492 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:39Z","lastTransitionTime":"2026-01-22T09:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.326339 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.326385 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.326396 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.326411 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.326423 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:39Z","lastTransitionTime":"2026-01-22T09:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.418306 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld"
Jan 22 09:11:39 crc kubenswrapper[4892]: E0122 09:11:39.418481 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.429023 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.429064 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.429075 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.429089 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.429101 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:39Z","lastTransitionTime":"2026-01-22T09:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.431341 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 10:42:49.974609037 +0000 UTC
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.531144 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.531188 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.531203 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.531220 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.531232 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:39Z","lastTransitionTime":"2026-01-22T09:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.633531 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.633571 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.633585 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.633599 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.633608 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:39Z","lastTransitionTime":"2026-01-22T09:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.735510 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.735545 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.735554 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.735566 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.735574 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:39Z","lastTransitionTime":"2026-01-22T09:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.836814 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.836854 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.836862 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.836876 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.836886 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:39Z","lastTransitionTime":"2026-01-22T09:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.938620 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.938652 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.938662 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.938676 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:39 crc kubenswrapper[4892]: I0122 09:11:39.938687 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:39Z","lastTransitionTime":"2026-01-22T09:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.040877 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.040975 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.041006 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.041036 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.041056 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:40Z","lastTransitionTime":"2026-01-22T09:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.143250 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.143302 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.143311 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.143328 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.143338 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:40Z","lastTransitionTime":"2026-01-22T09:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.245995 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.246083 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.246104 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.246125 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.246138 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:40Z","lastTransitionTime":"2026-01-22T09:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.348929 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.349133 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.349163 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.349193 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.349219 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:40Z","lastTransitionTime":"2026-01-22T09:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.418552 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.418552 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 09:11:40 crc kubenswrapper[4892]: E0122 09:11:40.418694 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.418565 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 09:11:40 crc kubenswrapper[4892]: E0122 09:11:40.418746 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 09:11:40 crc kubenswrapper[4892]: E0122 09:11:40.418918 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.431588 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 05:13:43.220802749 +0000 UTC
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.452097 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.452134 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.452143 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.452159 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.452168 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:40Z","lastTransitionTime":"2026-01-22T09:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
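[Editor's note] The condition={...} payload repeated on every setters.go:603 line is a core/v1 NodeCondition serialized as JSON. The standalone struct below (field names copied from the log itself, not imported from k8s.io/api) marshals to the same shape, which makes the pattern in the log easy to read: lastHeartbeatTime advances on every status sync while status stays "False" until the CNI config appears.

// condition.go: reproduce the logged NodeCondition JSON shape.
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Timestamp chosen to match the log region above; any sync time works.
	now := time.Date(2026, 1, 22, 9, 11, 40, 0, time.UTC).Format(time.RFC3339)
	c := nodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  now,
		LastTransitionTime: now,
		Reason:             "KubeletNotReady",
		Message:            "container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady",
	}
	b, _ := json.Marshal(c)
	fmt.Println(string(b)) // same key order and shape as the setters.go lines
}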
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.554478 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.554515 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.554523 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.554536 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.554545 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:40Z","lastTransitionTime":"2026-01-22T09:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.656502 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.656731 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.656742 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.656758 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.656771 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:40Z","lastTransitionTime":"2026-01-22T09:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.759117 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.759159 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.759170 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.759187 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.759201 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:40Z","lastTransitionTime":"2026-01-22T09:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.861544 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.861588 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.861599 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.861621 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.861633 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:40Z","lastTransitionTime":"2026-01-22T09:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.963868 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.963913 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.963926 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.963946 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:40 crc kubenswrapper[4892]: I0122 09:11:40.963959 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:40Z","lastTransitionTime":"2026-01-22T09:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.066377 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.066420 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.066431 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.066447 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.066458 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:41Z","lastTransitionTime":"2026-01-22T09:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.168358 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.168393 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.168401 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.168414 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.168423 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:41Z","lastTransitionTime":"2026-01-22T09:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.270870 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.270918 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.270935 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.270961 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.270978 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:41Z","lastTransitionTime":"2026-01-22T09:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.373970 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.374027 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.374044 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.374071 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.374090 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:41Z","lastTransitionTime":"2026-01-22T09:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.419015 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld"
Jan 22 09:11:41 crc kubenswrapper[4892]: E0122 09:11:41.419403 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615"
Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.432166 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 03:47:31.845761949 +0000 UTC
Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.434224 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:41Z is after 2025-08-24T17:21:41Z"
Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.446564 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:41Z is after 2025-08-24T17:21:41Z"
Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.463503 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:41Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.473519 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20e79d60-66bf-44b6-8e7c-f8d995b5cc79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://754583e28044dfcccfb14919505f1bc67c0077cbc73e8219ea5d5ac9ff75d147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b4ba98c4f3bef5c4535b29e4eaa79ec0d12d63883368ade682af671f06d8e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:11:
08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ntkhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:41Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.476393 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.476424 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.476436 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.476451 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.476461 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:41Z","lastTransitionTime":"2026-01-22T09:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.484263 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5nnld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7391f43-09a9-4333-8df2-72d4fdc02615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99966\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99966\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:11:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5nnld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:41Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.497081 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:41Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.512915 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:41Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.526781 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:41Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.542813 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:41Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.551751 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7391f43-09a9-4333-8df2-72d4fdc02615-metrics-certs\") pod \"network-metrics-daemon-5nnld\" (UID: \"f7391f43-09a9-4333-8df2-72d4fdc02615\") " pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:11:41 crc kubenswrapper[4892]: E0122 09:11:41.551915 4892 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:11:41 crc kubenswrapper[4892]: E0122 09:11:41.551977 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7391f43-09a9-4333-8df2-72d4fdc02615-metrics-certs podName:f7391f43-09a9-4333-8df2-72d4fdc02615 nodeName:}" failed. No retries permitted until 2026-01-22 09:12:13.551964762 +0000 UTC m=+103.396043825 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f7391f43-09a9-4333-8df2-72d4fdc02615-metrics-certs") pod "network-metrics-daemon-5nnld" (UID: "f7391f43-09a9-4333-8df2-72d4fdc02615") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.555741 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:41Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.573058 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:41Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.579108 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.579135 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.579145 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.579160 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.579171 4892 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:41Z","lastTransitionTime":"2026-01-22T09:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.585915 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"142d92a9-f2e0-45be-82f5-322439d4489c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5b378f795b29dfc84b76bc5d00e62a720152c11de7706951cf16f6f28d22695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f8d7970508541aff408a864ee28b62dfcc364c71eb9cf6a7d8bed65a048f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c26cf9e131d882402ee4d883129a1aa107469310cd81b7b4132ff07af1c56d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c122b5a9b05843b32891382b08a26e863fd5611e29ca3bfcfbfe10be5311d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://025c122b5a9b05843b32891382b08a26e863fd5611e29ca3bfcfbfe10be5311d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:41Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.598725 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:41Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.609501 4892 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-gqbrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2782a4-367a-4690-911a-06ca51331fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e069b9560e67a7f6e2188bfbfdd2180c36c0b0956f4a1d7c7f5803abcd3603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhzbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gqbrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:41Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.626931 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:41Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.651009 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a61dd8b6b26ae316bf2376a89151a98f6bda2fa88d9e28440b6b40f23efb27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a61dd8b6b26ae316bf2376a89151a98f6bda2fa88d9e28440b6b40f23efb27e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"message\\\":\\\"andler 8\\\\nI0122 09:11:23.361739 6586 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 09:11:23.361753 6586 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 09:11:23.361743 6586 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 09:11:23.361756 6586 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 09:11:23.361788 6586 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.361851 6586 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.361938 6586 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362106 6586 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362180 6586 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362302 6586 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362865 6586 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 09:11:23.362909 6586 factory.go:656] Stopping watch factory\\\\nI0122 09:11:23.362924 6586 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-whb2h_openshift-ovn-kubernetes(a93623e9-3eab-47bb-b94a-5b962f3eb203)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:41Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.664531 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cf8a3ae4a4598966cb6d9be3d22dbad427f414e81a94e33d3283e4c6d662a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:41Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.681387 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.681422 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.681433 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.681469 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.681482 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:41Z","lastTransitionTime":"2026-01-22T09:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.783532 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.783576 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.783589 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.783604 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.783616 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:41Z","lastTransitionTime":"2026-01-22T09:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.886054 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.886085 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.886095 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.886110 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.886121 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:41Z","lastTransitionTime":"2026-01-22T09:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.988960 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.988997 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.989006 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.989019 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:41 crc kubenswrapper[4892]: I0122 09:11:41.989028 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:41Z","lastTransitionTime":"2026-01-22T09:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.091651 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.091684 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.091693 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.091705 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.091714 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:42Z","lastTransitionTime":"2026-01-22T09:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.194440 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.194777 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.194790 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.194810 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.194822 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:42Z","lastTransitionTime":"2026-01-22T09:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.297738 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.297787 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.297803 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.297824 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.297841 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:42Z","lastTransitionTime":"2026-01-22T09:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.401161 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.401217 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.401233 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.401256 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.401315 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:42Z","lastTransitionTime":"2026-01-22T09:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.417930 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.418037 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.417946 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:11:42 crc kubenswrapper[4892]: E0122 09:11:42.418220 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:11:42 crc kubenswrapper[4892]: E0122 09:11:42.418314 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:11:42 crc kubenswrapper[4892]: E0122 09:11:42.418064 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.433229 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 01:37:02.319691706 +0000 UTC Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.503671 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.503702 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.503709 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.503722 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.503764 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:42Z","lastTransitionTime":"2026-01-22T09:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.606492 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.606524 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.606532 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.606544 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.606553 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:42Z","lastTransitionTime":"2026-01-22T09:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.708659 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.708743 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.708757 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.708773 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.708784 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:42Z","lastTransitionTime":"2026-01-22T09:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.811573 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.811609 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.811619 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.811636 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.811647 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:42Z","lastTransitionTime":"2026-01-22T09:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.823173 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hz9vn_80ef00cc-97bb-4f08-ba72-3947ab29043f/kube-multus/0.log" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.823211 4892 generic.go:334] "Generic (PLEG): container finished" podID="80ef00cc-97bb-4f08-ba72-3947ab29043f" containerID="e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b" exitCode=1 Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.823257 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hz9vn" event={"ID":"80ef00cc-97bb-4f08-ba72-3947ab29043f","Type":"ContainerDied","Data":"e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b"} Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.823950 4892 scope.go:117] "RemoveContainer" containerID="e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.837628 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/en
v\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:42Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.862273 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a61dd8b6b26ae316bf2376a89151a98f6bda2fa
88d9e28440b6b40f23efb27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a61dd8b6b26ae316bf2376a89151a98f6bda2fa88d9e28440b6b40f23efb27e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"message\\\":\\\"andler 8\\\\nI0122 09:11:23.361739 6586 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 09:11:23.361753 6586 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 09:11:23.361743 6586 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 09:11:23.361756 6586 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 09:11:23.361788 6586 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.361851 6586 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.361938 6586 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362106 6586 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362180 6586 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362302 6586 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362865 6586 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 09:11:23.362909 6586 factory.go:656] Stopping watch factory\\\\nI0122 09:11:23.362924 6586 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-whb2h_openshift-ovn-kubernetes(a93623e9-3eab-47bb-b94a-5b962f3eb203)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:42Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.881866 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cf8a3ae4a4598966cb6d9be3d22dbad427f414e81a94e33d3283e4c6d662a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:42Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.892956 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee122
0d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:42Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.905347 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:42Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.913573 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.913598 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.913608 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.913622 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.913633 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:42Z","lastTransitionTime":"2026-01-22T09:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.916928 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:42Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.927845 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20e79d60-66bf-44b6-8e7c-f8d995b5cc79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://754583e28044dfcccfb14919505f1bc67c0077cbc73e8219ea5d5ac9ff75d147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b4ba98c4f3bef5c4535b29e4eaa79ec0d12d63883368ade682af671f06d8e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:11:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ntkhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:42Z is after 2025-08-24T17:21:41Z" Jan 22 
09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.937769 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5nnld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7391f43-09a9-4333-8df2-72d4fdc02615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99966\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99966\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:11:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5nnld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:42Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.948913 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:42Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.963375 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:42Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.974785 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:42Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.987308 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:11:42Z\\\",\\\"message\\\":\\\"2026-01-22T09:10:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4c2ebf13-8d4a-40e7-877a-ef97981620be\\\\n2026-01-22T09:10:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4c2ebf13-8d4a-40e7-877a-ef97981620be to /host/opt/cni/bin/\\\\n2026-01-22T09:10:57Z [verbose] multus-daemon 
started\\\\n2026-01-22T09:10:57Z [verbose] Readiness Indicator file check\\\\n2026-01-22T09:11:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:42Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:42 crc kubenswrapper[4892]: I0122 09:11:42.996881 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:42Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.008261 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:43Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.015772 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.015817 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.015829 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.015846 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.015860 4892 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:43Z","lastTransitionTime":"2026-01-22T09:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.019344 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"142d92a9-f2e0-45be-82f5-322439d4489c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5b378f795b29dfc84b76bc5d00e62a720152c11de7706951cf16f6f28d22695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f8d7970508541aff408a864ee28b62dfcc364c71eb9cf6a7d8bed65a048f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c26cf9e131d882402ee4d883129a1aa107469310cd81b7b4132ff07af1c56d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c122b5a9b05843b32891382b08a26e863fd5611e29ca3bfcfbfe10be5311d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://025c122b5a9b05843b32891382b08a26e863fd5611e29ca3bfcfbfe10be5311d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:43Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.029483 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:43Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.038595 4892 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-gqbrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2782a4-367a-4690-911a-06ca51331fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e069b9560e67a7f6e2188bfbfdd2180c36c0b0956f4a1d7c7f5803abcd3603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhzbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gqbrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:43Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.117633 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.117698 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.117707 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.117719 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.117727 4892 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:43Z","lastTransitionTime":"2026-01-22T09:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.219418 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.219450 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.219459 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.219473 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.219483 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:43Z","lastTransitionTime":"2026-01-22T09:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.321577 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.321622 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.321632 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.321648 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.321660 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:43Z","lastTransitionTime":"2026-01-22T09:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.417691 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:11:43 crc kubenswrapper[4892]: E0122 09:11:43.417815 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.425733 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.425761 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.425770 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.425780 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.425789 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:43Z","lastTransitionTime":"2026-01-22T09:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.433433 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 17:24:35.752626547 +0000 UTC Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.527501 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.527535 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.527544 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.527557 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.527568 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:43Z","lastTransitionTime":"2026-01-22T09:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.630268 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.630322 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.630331 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.630346 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.630355 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:43Z","lastTransitionTime":"2026-01-22T09:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.732557 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.732593 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.732601 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.732614 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.732624 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:43Z","lastTransitionTime":"2026-01-22T09:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.827546 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hz9vn_80ef00cc-97bb-4f08-ba72-3947ab29043f/kube-multus/0.log" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.827606 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hz9vn" event={"ID":"80ef00cc-97bb-4f08-ba72-3947ab29043f","Type":"ContainerStarted","Data":"d5dad65f61c4cb1bb2ceae159bb0447f72fadddb091f462882b14569cfc70bde"} Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.834833 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.834866 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.834876 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.834888 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.834897 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:43Z","lastTransitionTime":"2026-01-22T09:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.841344 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:43Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.860854 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a61dd8b6b26ae316bf2376a89151a98f6bda2fa88d9e28440b6b40f23efb27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a61dd8b6b26ae316bf2376a89151a98f6bda2fa88d9e28440b6b40f23efb27e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"message\\\":\\\"andler 8\\\\nI0122 09:11:23.361739 6586 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 09:11:23.361753 6586 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 09:11:23.361743 6586 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 09:11:23.361756 6586 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 09:11:23.361788 6586 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.361851 6586 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.361938 6586 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362106 6586 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362180 6586 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362302 6586 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362865 6586 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 09:11:23.362909 6586 factory.go:656] Stopping watch factory\\\\nI0122 09:11:23.362924 6586 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-whb2h_openshift-ovn-kubernetes(a93623e9-3eab-47bb-b94a-5b962f3eb203)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:43Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.873254 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cf8a3ae4a4598966cb6d9be3d22dbad427f414e81a94e33d3283e4c6d662a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:43Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.886441 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee122
0d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:43Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.897060 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:43Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.907686 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:43Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.917972 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20e79d60-66bf-44b6-8e7c-f8d995b5cc79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://754583e28044dfcccfb14919505f1bc67c0077cbc73e8219ea5d5ac9ff75d147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b4ba98c4f3bef5c4535b29e4eaa79ec0d12d63883368ade682af671f06d8e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:11:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ntkhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:43Z is after 2025-08-24T17:21:41Z" Jan 22 
09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.927682 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5nnld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7391f43-09a9-4333-8df2-72d4fdc02615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99966\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99966\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:11:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5nnld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:43Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.937200 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.937252 4892 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.937265 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.937312 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.937329 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:43Z","lastTransitionTime":"2026-01-22T09:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.938649 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:43Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.949688 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:43Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.959961 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:43Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.970139 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5dad65f61c4cb1bb2ceae159bb0447f72fadddb091f462882b14569cfc70bde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:11:42Z\\\",\\\"message\\\":\\\"2026-01-22T09:10:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4c2ebf13-8d4a-40e7-877a-ef97981620be\\\\n2026-01-22T09:10:56+00:00 [cnibincopy] 
Successfully moved files in /host/opt/cni/bin/upgrade_4c2ebf13-8d4a-40e7-877a-ef97981620be to /host/opt/cni/bin/\\\\n2026-01-22T09:10:57Z [verbose] multus-daemon started\\\\n2026-01-22T09:10:57Z [verbose] Readiness Indicator file check\\\\n2026-01-22T09:11:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:43Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.978513 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:43Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.989719 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:43Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:43 crc kubenswrapper[4892]: I0122 09:11:43.999132 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"142d92a9-f2e0-45be-82f5-322439d4489c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5b378f795b29dfc84b76bc5d00e62a720152c11de7706951cf16f6f28d22695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f8d7970508541aff408a864ee28b62dfcc364c71eb9cf6a7d8bed65a048f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c26cf9e131d882402ee4d883129a1aa107469310cd81b7b4132ff07af1c56d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c122b5a9b05843b32891382b08a26e863fd5611e29ca3bfcfbfe10be5311d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://025c122b5a9b05843b32891382b08a26e863fd5611e29ca3bfcfbfe10be5311d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:43Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.007886 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:44Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.016148 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gqbrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2782a4-367a-4690-911a-06ca51331fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e069b9560e67a7f6e2188bfbfdd2180c36c0b0956f4a1d7c7f5803abcd3603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhzbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gqbrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:44Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.040145 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.040180 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.040191 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.040206 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.040217 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:44Z","lastTransitionTime":"2026-01-22T09:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.142836 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.142885 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.142897 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.142919 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.142931 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:44Z","lastTransitionTime":"2026-01-22T09:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.245304 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.245347 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.245359 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.245376 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.245390 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:44Z","lastTransitionTime":"2026-01-22T09:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.347557 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.347619 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.347641 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.347664 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.347681 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:44Z","lastTransitionTime":"2026-01-22T09:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.417909 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.417982 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:11:44 crc kubenswrapper[4892]: E0122 09:11:44.418026 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.418052 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:11:44 crc kubenswrapper[4892]: E0122 09:11:44.418107 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:11:44 crc kubenswrapper[4892]: E0122 09:11:44.418230 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.433951 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 11:18:22.662768443 +0000 UTC Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.449486 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.449522 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.449531 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.449546 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.449554 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:44Z","lastTransitionTime":"2026-01-22T09:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.539844 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.539874 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.539882 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.539894 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.539923 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:44Z","lastTransitionTime":"2026-01-22T09:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:44 crc kubenswrapper[4892]: E0122 09:11:44.554898 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c930485-9734-4304-ad2c-ecfe6f90ae0f\\\",\\\"systemUUID\\\":\\\"61509b40-08df-4430-847e-d3a8d2681f9e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:44Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.558259 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.558315 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.558324 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.558334 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.558361 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:44Z","lastTransitionTime":"2026-01-22T09:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:44 crc kubenswrapper[4892]: E0122 09:11:44.573966 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c930485-9734-4304-ad2c-ecfe6f90ae0f\\\",\\\"systemUUID\\\":\\\"61509b40-08df-4430-847e-d3a8d2681f9e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:44Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.577915 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.577939 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.577948 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.577959 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.577969 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:44Z","lastTransitionTime":"2026-01-22T09:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:44 crc kubenswrapper[4892]: E0122 09:11:44.594851 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c930485-9734-4304-ad2c-ecfe6f90ae0f\\\",\\\"systemUUID\\\":\\\"61509b40-08df-4430-847e-d3a8d2681f9e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:44Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.599020 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.599071 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.599086 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.599107 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.599121 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:44Z","lastTransitionTime":"2026-01-22T09:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:44 crc kubenswrapper[4892]: E0122 09:11:44.611276 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c930485-9734-4304-ad2c-ecfe6f90ae0f\\\",\\\"systemUUID\\\":\\\"61509b40-08df-4430-847e-d3a8d2681f9e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:44Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.614064 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.614095 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.614106 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.614121 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.614134 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:44Z","lastTransitionTime":"2026-01-22T09:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:44 crc kubenswrapper[4892]: E0122 09:11:44.627895 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c930485-9734-4304-ad2c-ecfe6f90ae0f\\\",\\\"systemUUID\\\":\\\"61509b40-08df-4430-847e-d3a8d2681f9e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:44Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:44 crc kubenswrapper[4892]: E0122 09:11:44.628005 4892 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.629718 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.629744 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.629752 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.629766 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.629775 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:44Z","lastTransitionTime":"2026-01-22T09:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.732211 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.732484 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.732582 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.732693 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.732780 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:44Z","lastTransitionTime":"2026-01-22T09:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.834928 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.834970 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.834983 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.835000 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.835016 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:44Z","lastTransitionTime":"2026-01-22T09:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.937336 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.937691 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.937924 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.938143 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:44 crc kubenswrapper[4892]: I0122 09:11:44.938412 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:44Z","lastTransitionTime":"2026-01-22T09:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.041448 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.041509 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.041548 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.041592 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.041616 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:45Z","lastTransitionTime":"2026-01-22T09:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.144442 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.144483 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.144494 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.144508 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.144517 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:45Z","lastTransitionTime":"2026-01-22T09:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.246805 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.247033 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.247099 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.247157 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.247210 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:45Z","lastTransitionTime":"2026-01-22T09:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.349573 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.349663 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.349734 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.349770 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.349866 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:45Z","lastTransitionTime":"2026-01-22T09:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.418242 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:11:45 crc kubenswrapper[4892]: E0122 09:11:45.418474 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.434970 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 13:46:01.361643587 +0000 UTC Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.452010 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.452061 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.452073 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.452089 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.452103 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:45Z","lastTransitionTime":"2026-01-22T09:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.555633 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.555757 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.555778 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.555804 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.555824 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:45Z","lastTransitionTime":"2026-01-22T09:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.659163 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.659215 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.659309 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.659335 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.659352 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:45Z","lastTransitionTime":"2026-01-22T09:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.762510 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.762584 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.762607 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.762636 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.762656 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:45Z","lastTransitionTime":"2026-01-22T09:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.864696 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.864760 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.864774 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.864794 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.864805 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:45Z","lastTransitionTime":"2026-01-22T09:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.967013 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.967353 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.967496 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.967629 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:45 crc kubenswrapper[4892]: I0122 09:11:45.967764 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:45Z","lastTransitionTime":"2026-01-22T09:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.070168 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.070231 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.070271 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.070348 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.070372 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:46Z","lastTransitionTime":"2026-01-22T09:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.173233 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.173281 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.173309 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.173327 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.173343 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:46Z","lastTransitionTime":"2026-01-22T09:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.275795 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.275857 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.275878 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.275902 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.275919 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:46Z","lastTransitionTime":"2026-01-22T09:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.378357 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.378411 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.378424 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.378445 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.378459 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:46Z","lastTransitionTime":"2026-01-22T09:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.418441 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.418527 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.418441 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:11:46 crc kubenswrapper[4892]: E0122 09:11:46.418745 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:11:46 crc kubenswrapper[4892]: E0122 09:11:46.418659 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:11:46 crc kubenswrapper[4892]: E0122 09:11:46.418547 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.435427 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 11:04:28.675494082 +0000 UTC Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.480776 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.480821 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.480830 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.480848 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.480860 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:46Z","lastTransitionTime":"2026-01-22T09:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.583868 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.583928 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.583950 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.583980 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.584004 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:46Z","lastTransitionTime":"2026-01-22T09:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.686476 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.686554 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.686578 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.686611 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.686636 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:46Z","lastTransitionTime":"2026-01-22T09:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.788609 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.788638 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.788645 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.788659 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.788668 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:46Z","lastTransitionTime":"2026-01-22T09:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.890680 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.890721 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.890733 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.890748 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.890759 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:46Z","lastTransitionTime":"2026-01-22T09:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.993255 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.993307 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.993319 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.993334 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:46 crc kubenswrapper[4892]: I0122 09:11:46.993347 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:46Z","lastTransitionTime":"2026-01-22T09:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.096599 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.096667 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.096685 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.096709 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.096727 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:47Z","lastTransitionTime":"2026-01-22T09:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.200053 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.200112 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.200136 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.200163 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.200183 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:47Z","lastTransitionTime":"2026-01-22T09:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.304084 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.304153 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.304178 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.304207 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.304230 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:47Z","lastTransitionTime":"2026-01-22T09:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.406900 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.406952 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.406969 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.406991 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.407008 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:47Z","lastTransitionTime":"2026-01-22T09:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.418654 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:11:47 crc kubenswrapper[4892]: E0122 09:11:47.418871 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.436466 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 15:53:17.34896788 +0000 UTC Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.508869 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.508916 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.508925 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.508940 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.508949 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:47Z","lastTransitionTime":"2026-01-22T09:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.611579 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.611609 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.611617 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.611630 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.611641 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:47Z","lastTransitionTime":"2026-01-22T09:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.713354 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.713383 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.713392 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.713404 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.713413 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:47Z","lastTransitionTime":"2026-01-22T09:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.815518 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.815559 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.815572 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.815589 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.815599 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:47Z","lastTransitionTime":"2026-01-22T09:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.918241 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.918308 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.918320 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.918336 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:47 crc kubenswrapper[4892]: I0122 09:11:47.918348 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:47Z","lastTransitionTime":"2026-01-22T09:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.020761 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.020798 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.020809 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.020823 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.020833 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:48Z","lastTransitionTime":"2026-01-22T09:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.123471 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.123519 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.123532 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.123547 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.123557 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:48Z","lastTransitionTime":"2026-01-22T09:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.225860 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.225896 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.225904 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.225917 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.225926 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:48Z","lastTransitionTime":"2026-01-22T09:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.327973 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.328037 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.328061 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.328090 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.328114 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:48Z","lastTransitionTime":"2026-01-22T09:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.418412 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.418497 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:11:48 crc kubenswrapper[4892]: E0122 09:11:48.418556 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:11:48 crc kubenswrapper[4892]: E0122 09:11:48.418669 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.418736 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:11:48 crc kubenswrapper[4892]: E0122 09:11:48.418811 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.430175 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.430202 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.430211 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.430224 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.430242 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:48Z","lastTransitionTime":"2026-01-22T09:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.437413 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 23:21:06.713388162 +0000 UTC Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.532025 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.532057 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.532065 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.532077 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.532086 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:48Z","lastTransitionTime":"2026-01-22T09:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.634693 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.634728 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.634738 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.634751 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:48 crc kubenswrapper[4892]: I0122 09:11:48.634769 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:48Z","lastTransitionTime":"2026-01-22T09:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the five-line status group above repeats at ~100 ms intervals from 09:11:48.737 through 09:11:50.800 with only the timestamps advancing; the distinct entries interleaved in that window are kept below, followed by the closing line of the final repeat ...]
Jan 22 09:11:49 crc kubenswrapper[4892]: I0122 09:11:49.418412 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld"
Jan 22 09:11:49 crc kubenswrapper[4892]: E0122 09:11:49.418667 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615"
Jan 22 09:11:49 crc kubenswrapper[4892]: I0122 09:11:49.438150 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 16:40:55.048404068 +0000 UTC
Jan 22 09:11:50 crc kubenswrapper[4892]: I0122 09:11:50.418645 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 09:11:50 crc kubenswrapper[4892]: I0122 09:11:50.418658 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 09:11:50 crc kubenswrapper[4892]: I0122 09:11:50.418665 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 09:11:50 crc kubenswrapper[4892]: E0122 09:11:50.418798 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 09:11:50 crc kubenswrapper[4892]: E0122 09:11:50.419142 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 09:11:50 crc kubenswrapper[4892]: E0122 09:11:50.419216 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 09:11:50 crc kubenswrapper[4892]: I0122 09:11:50.419575 4892 scope.go:117] "RemoveContainer" containerID="5a61dd8b6b26ae316bf2376a89151a98f6bda2fa88d9e28440b6b40f23efb27e"
Jan 22 09:11:50 crc kubenswrapper[4892]: I0122 09:11:50.438396 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 17:51:28.7729323 +0000 UTC
Jan 22 09:11:50 crc kubenswrapper[4892]: I0122 09:11:50.800911 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:50Z","lastTransitionTime":"2026-01-22T09:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
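[Triage aid, not part of the capture: a minimal Python 3 sketch that collapses the repeating five-line status group so the distinct kubelet errors stand out. It assumes the journal was exported one entry per line, e.g. with journalctl -u kubelet > kubelet.log; the filename and the keep-five heuristic are illustrative, not anything the kubelet provides.]

import re
import sys

# Lines belonging to the kubelet's repeating not-ready status group;
# between repeats only the timestamps vary, the message text does not.
GROUP = re.compile(r'"Recording event message for node"|"Node became not ready"')

def condense(lines):
    """Keep the first five-line group verbatim, then summarize consecutive repeats."""
    repeats = 0
    for line in lines:
        line = line.rstrip("\n")
        if GROUP.search(line):
            repeats += 1
            if repeats <= 5:      # first full group is kept as-is
                yield line
            continue
        if repeats > 5:
            yield f"[... status group lines repeated {repeats - 5} more times ...]"
        repeats = 0
        yield line
    if repeats > 5:               # the export may end inside a run of repeats
        yield f"[... status group lines repeated {repeats - 5} more times ...]"

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "kubelet.log"  # illustrative name
    with open(path, encoding="utf-8", errors="replace") as fh:
        for out in condense(fh):
            print(out)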
Has your network provider started?"} Jan 22 09:11:50 crc kubenswrapper[4892]: I0122 09:11:50.861425 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-whb2h_a93623e9-3eab-47bb-b94a-5b962f3eb203/ovnkube-controller/2.log" Jan 22 09:11:50 crc kubenswrapper[4892]: I0122 09:11:50.864922 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" event={"ID":"a93623e9-3eab-47bb-b94a-5b962f3eb203","Type":"ContainerStarted","Data":"0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c"} Jan 22 09:11:50 crc kubenswrapper[4892]: I0122 09:11:50.865546 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:11:50 crc kubenswrapper[4892]: I0122 09:11:50.876996 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gqbrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2782a4-367a-4690-911a-06ca51331fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e069b9560e67a7f6e2188bfbfdd2180c36c0b0956f4a1d7c7f5803abcd3603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhzbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gqbrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:50Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:50 crc kubenswrapper[4892]: I0122 
09:11:50.891531 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:50Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:50 crc kubenswrapper[4892]: I0122 09:11:50.902069 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"142d92a9-f2e0-45be-82f5-322439d4489c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5b378f795b29dfc84b76bc5d00e62a720152c11de7706951cf16f6f28d22695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f8d7970508541aff408a864ee28b62dfcc364c71eb9cf6a7d8bed65a048f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c26cf9e131d882402ee4d883129a1aa107469310cd81b7b4132ff07af1c56d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c122b5a9b05843b32891382b08a26e863fd5611e29ca3bfcfbfe10be5311d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://025c122b5a9b05843b32891382b08a26e863fd5611e29ca3bfcfbfe10be5311d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:50Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:50 crc kubenswrapper[4892]: I0122 09:11:50.903453 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:50 crc kubenswrapper[4892]: I0122 09:11:50.903493 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:50 crc kubenswrapper[4892]: I0122 09:11:50.903502 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:50 crc kubenswrapper[4892]: I0122 09:11:50.903520 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:50 crc kubenswrapper[4892]: I0122 09:11:50.903531 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:50Z","lastTransitionTime":"2026-01-22T09:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:50 crc kubenswrapper[4892]: I0122 09:11:50.912426 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:50Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:50 crc kubenswrapper[4892]: I0122 09:11:50.922674 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:50Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:50 crc kubenswrapper[4892]: I0122 09:11:50.939234 
4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154
edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a61dd8b6b26ae316bf2376a89151a98f6bda2fa88d9e28440b6b40f23efb27e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"message\\\":\\\"andler 8\\\\nI0122 09:11:23.361739 6586 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 09:11:23.361753 6586 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 09:11:23.361743 6586 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 09:11:23.361756 6586 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 09:11:23.361788 6586 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.361851 6586 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.361938 6586 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362106 6586 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362180 6586 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362302 6586 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362865 6586 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 09:11:23.362909 6586 factory.go:656] Stopping watch factory\\\\nI0122 09:11:23.362924 6586 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:50Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:50 crc kubenswrapper[4892]: I0122 09:11:50.952882 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cf8a3ae4a4598966cb6d9be3d22dbad427f414e81a94e33d3283e4c6d662a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:50Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:50 crc kubenswrapper[4892]: I0122 09:11:50.964575 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:50Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:50 crc kubenswrapper[4892]: I0122 09:11:50.977315 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:50Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:50 crc kubenswrapper[4892]: I0122 09:11:50.991678 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:50Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.005743 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.005769 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.005778 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.005792 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.005803 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:51Z","lastTransitionTime":"2026-01-22T09:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.006328 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20e79d60-66bf-44b6-8e7c-f8d995b5cc79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://754583e28044dfcccfb14919505f1bc67c0077cbc73e8219ea5d5ac9ff75d147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b4ba98c4f3bef5c4535b29e4eaa79ec0d12d63883368ade682af671f06d8e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:11:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ntkhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.017328 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5nnld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7391f43-09a9-4333-8df2-72d4fdc02615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99966\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99966\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:11:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5nnld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.031635 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.045066 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.058741 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.070488 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5dad65f61c4cb1bb2ceae159bb0447f72fadddb091f462882b14569cfc70bde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:11:42Z\\\",\\\"message\\\":\\\"2026-01-22T09:10:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4c2ebf13-8d4a-40e7-877a-ef97981620be\\\\n2026-01-22T09:10:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4c2ebf13-8d4a-40e7-877a-ef97981620be to /host/opt/cni/bin/\\\\n2026-01-22T09:10:57Z [verbose] multus-daemon started\\\\n2026-01-22T09:10:57Z [verbose] Readiness Indicator file check\\\\n2026-01-22T09:11:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.085011 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.108054 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.108117 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.108130 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.108149 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.108161 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:51Z","lastTransitionTime":"2026-01-22T09:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.210204 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.210263 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.210277 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.210319 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.210334 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:51Z","lastTransitionTime":"2026-01-22T09:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.312070 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.312104 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.312115 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.312129 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.312139 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:51Z","lastTransitionTime":"2026-01-22T09:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.414496 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.414548 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.414559 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.414578 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.414589 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:51Z","lastTransitionTime":"2026-01-22T09:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.417803 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:11:51 crc kubenswrapper[4892]: E0122 09:11:51.417952 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.434621 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.439466 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 01:06:51.96150259 +0000 UTC Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.447482 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.460004 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.475426 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5dad65f61c4cb1bb2ceae159bb0447f72fadddb091f462882b14569cfc70bde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:11:42Z\\\",\\\"message\\\":\\\"2026-01-22T09:10:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4c2ebf13-8d4a-40e7-877a-ef97981620be\\\\n2026-01-22T09:10:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4c2ebf13-8d4a-40e7-877a-ef97981620be to /host/opt/cni/bin/\\\\n2026-01-22T09:10:57Z [verbose] multus-daemon started\\\\n2026-01-22T09:10:57Z [verbose] Readiness Indicator file check\\\\n2026-01-22T09:11:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.484912 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.499925 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.514676 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"142d92a9-f2e0-45be-82f5-322439d4489c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5b378f795b29dfc84b76bc5d00e62a720152c11de7706951cf16f6f28d22695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f8d7970508541aff408a864ee28b62dfcc364c71eb9cf6a7d8bed65a048f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c26cf9e131d882402ee4d883129a1aa107469310cd81b7b4132ff07af1c56d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c122b5a9b05843b32891382b08a26e863fd5611e29ca3bfcfbfe10be5311d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://025c122b5a9b05843b32891382b08a26e863fd5611e29ca3bfcfbfe10be5311d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.516783 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.516867 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.516913 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.516940 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.516960 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:51Z","lastTransitionTime":"2026-01-22T09:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.527435 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.537056 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gqbrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2782a4-367a-4690-911a-06ca51331fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e069b9560e67a7f6e2188bfbfdd2180c36c0b0956f4a1d7c7f5803abcd3603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhzbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gqbrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.548703 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.570794 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a61dd8b6b26ae316bf2376a89151a98f6bda2fa88d9e28440b6b40f23efb27e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"message\\\":\\\"andler 8\\\\nI0122 09:11:23.361739 6586 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 09:11:23.361753 6586 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 09:11:23.361743 6586 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 09:11:23.361756 6586 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 09:11:23.361788 6586 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.361851 6586 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.361938 6586 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362106 6586 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362180 6586 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362302 6586 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362865 6586 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 09:11:23.362909 6586 factory.go:656] Stopping watch factory\\\\nI0122 09:11:23.362924 6586 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.585892 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cf8a3ae4a4598966cb6d9be3d22dbad427f414e81a94e33d3283e4c6d662a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.597905 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.609372 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.619763 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.619797 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.619808 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.619824 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.619837 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:51Z","lastTransitionTime":"2026-01-22T09:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.622409 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.632055 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20e79d60-66bf-44b6-8e7c-f8d995b5cc79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://754583e28044dfcccfb14919505f1bc67c0077cbc73e8219ea5d5ac9ff75d147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b4ba98c4f3bef5c4535b29e4eaa79ec0d12d63883368ade682af671f06d8e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:11:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ntkhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:51Z is after 2025-08-24T17:21:41Z" Jan 22 
09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.643677 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5nnld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7391f43-09a9-4333-8df2-72d4fdc02615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99966\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99966\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:11:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5nnld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.722766 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.722814 4892 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.722829 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.722848 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.722860 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:51Z","lastTransitionTime":"2026-01-22T09:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.825414 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.825464 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.825477 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.825497 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.825512 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:51Z","lastTransitionTime":"2026-01-22T09:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.870984 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-whb2h_a93623e9-3eab-47bb-b94a-5b962f3eb203/ovnkube-controller/3.log" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.871616 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-whb2h_a93623e9-3eab-47bb-b94a-5b962f3eb203/ovnkube-controller/2.log" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.875125 4892 generic.go:334] "Generic (PLEG): container finished" podID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerID="0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c" exitCode=1 Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.875169 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" event={"ID":"a93623e9-3eab-47bb-b94a-5b962f3eb203","Type":"ContainerDied","Data":"0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c"} Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.875216 4892 scope.go:117] "RemoveContainer" containerID="5a61dd8b6b26ae316bf2376a89151a98f6bda2fa88d9e28440b6b40f23efb27e" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.875765 4892 scope.go:117] "RemoveContainer" containerID="0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c" Jan 22 09:11:51 crc kubenswrapper[4892]: E0122 09:11:51.875947 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-whb2h_openshift-ovn-kubernetes(a93623e9-3eab-47bb-b94a-5b962f3eb203)\"" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.892516 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.904420 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.919545 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5dad65f61c4cb1bb2ceae159bb0447f72fadddb091f462882b14569cfc70bde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:11:42Z\\\",\\\"message\\\":\\\"2026-01-22T09:10:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4c2ebf13-8d4a-40e7-877a-ef97981620be\\\\n2026-01-22T09:10:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4c2ebf13-8d4a-40e7-877a-ef97981620be to /host/opt/cni/bin/\\\\n2026-01-22T09:10:57Z [verbose] multus-daemon started\\\\n2026-01-22T09:10:57Z [verbose] Readiness Indicator file check\\\\n2026-01-22T09:11:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.927193 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.927241 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.927253 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.927272 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.927304 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:51Z","lastTransitionTime":"2026-01-22T09:11:51Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.928420 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.939574 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.953143 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"142d92a9-f2e0-45be-82f5-322439d4489c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5b378f795b29dfc84b76bc5d00e62a720152c11de7706951cf16f6f28d22695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f8d7970508541aff408a864ee28b62dfcc364c71eb9cf6a7d8bed65a048f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c26cf9e131d882402ee4d883129a1aa107469310cd81b7b4132ff07af1c56d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c122b5a9b05843b32891382b08a26e863fd5611e29ca3bfcfbfe10be5311d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://025c122b5a9b05843b32891382b08a26e863fd5611e29ca3bfcfbfe10be5311d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.964534 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.972768 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gqbrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2782a4-367a-4690-911a-06ca51331fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e069b9560e67a7f6e2188bfbfdd2180c36c0b0956f4a1d7c7f5803abcd3603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhzbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gqbrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:51 crc kubenswrapper[4892]: I0122 09:11:51.984898 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:51Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.002227 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a61dd8b6b26ae316bf2376a89151a98f6bda2fa88d9e28440b6b40f23efb27e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:11:23Z\\\",\\\"message\\\":\\\"andler 8\\\\nI0122 09:11:23.361739 6586 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 09:11:23.361753 6586 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 09:11:23.361743 6586 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 09:11:23.361756 6586 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 09:11:23.361788 6586 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.361851 6586 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.361938 6586 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362106 6586 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362180 6586 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362302 6586 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:11:23.362865 6586 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 09:11:23.362909 6586 factory.go:656] Stopping watch factory\\\\nI0122 09:11:23.362924 6586 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:11:51Z\\\",\\\"message\\\":\\\"-4315-bb03-c937a9c0d9b6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0122 09:11:51.249991 6984 
factory.go:656] Stopping watch factory\\\\nI0122 09:11:51.250023 6984 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 09:11:51.250318 6984 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}\\\\nI0122 09:11:51.250349 6984 services_controller.go:360] Finished syncing service metrics on namespace openshift-etcd-operator for network=default : 1.323995ms\\\\nI0122 09:11:51.250381 6984 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/packageserver-service\\\\\\\"}\\\\nI0122 09:11:51.250391 6984 services_controller.go:360] Finished syncing service packageserver-service on namespace openshift-operator-lifecycle-manager for network=default : 1.381076ms\\\\nI0122 09:11:51.250572 6984 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0122 09:11:51.250769 6984 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0122 09:11:51.250816 6984 ovnkube.go:599] Stopped ovnkube\\\\nI0122 09:11:51.250844 6984 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 09:11:51.250932 6984 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd
37c130a45503be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:52Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.020605 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cf8a3ae4a4598966cb6d9be3d22dbad427f414e81a94e33d3283e4c6d662a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:52Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.029378 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.029438 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:52 crc 
kubenswrapper[4892]: I0122 09:11:52.029454 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.029474 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.029487 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:52Z","lastTransitionTime":"2026-01-22T09:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.034898 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:52Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.047571 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:52Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.063752 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:52Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.077857 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20e79d60-66bf-44b6-8e7c-f8d995b5cc79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://754583e28044dfcccfb14919505f1bc67c0077cbc73e8219ea5d5ac9ff75d147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b4ba98c4f3bef5c4535b29e4eaa79ec0d12d63883368ade682af671f06d8e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:11:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ntkhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:52Z is after 2025-08-24T17:21:41Z" Jan 22 
09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.094990 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5nnld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7391f43-09a9-4333-8df2-72d4fdc02615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99966\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99966\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:11:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5nnld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:52Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.109747 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:52Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.132220 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.132268 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.132279 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.132314 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.132329 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:52Z","lastTransitionTime":"2026-01-22T09:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.234967 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.235026 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.235041 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.235068 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.235081 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:52Z","lastTransitionTime":"2026-01-22T09:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.338685 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.338785 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.338808 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.338843 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.338869 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:52Z","lastTransitionTime":"2026-01-22T09:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.418334 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.418409 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.418408 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:11:52 crc kubenswrapper[4892]: E0122 09:11:52.418634 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:11:52 crc kubenswrapper[4892]: E0122 09:11:52.418859 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:11:52 crc kubenswrapper[4892]: E0122 09:11:52.418997 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.439629 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 02:33:15.19067568 +0000 UTC Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.442442 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.442486 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.442498 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.442522 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.442541 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:52Z","lastTransitionTime":"2026-01-22T09:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.544693 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.544728 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.544746 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.544766 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.544780 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:52Z","lastTransitionTime":"2026-01-22T09:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.647796 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.647836 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.647845 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.647859 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.647868 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:52Z","lastTransitionTime":"2026-01-22T09:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.750433 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.750489 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.750505 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.750530 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.750547 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:52Z","lastTransitionTime":"2026-01-22T09:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.853910 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.853955 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.853966 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.853980 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.853992 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:52Z","lastTransitionTime":"2026-01-22T09:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.882237 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-whb2h_a93623e9-3eab-47bb-b94a-5b962f3eb203/ovnkube-controller/3.log" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.886505 4892 scope.go:117] "RemoveContainer" containerID="0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c" Jan 22 09:11:52 crc kubenswrapper[4892]: E0122 09:11:52.886696 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-whb2h_openshift-ovn-kubernetes(a93623e9-3eab-47bb-b94a-5b962f3eb203)\"" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.905480 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11c4f9d4-df36-4aac-a456-0d3daf783159\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://953a7bcb2be6cf0b411561bb80b01c12400e164946b22532c947b1a8dd98cb95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008b0161aa8db1cc2d80ec6310322d0063827de607b5786885267bb6b68ae39d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8663c92dd9ecc2ef72dac6711a33c3532de36ed4ac855d0346ebb484a9396814\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:52Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.924755 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:52Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.941615 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db7532bc181b133c3fce3bbb738dbb4d856cf0efe261977a175135e866c5ae81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:52Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.955358 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20e79d60-66bf-44b6-8e7c-f8d995b5cc79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://754583e28044dfcccfb14919505f1bc67c0077cbc73e8219ea5d5ac9ff75d147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b4ba98c4f3bef5c4535b29e4eaa79ec0d12d63883368ade682af671f06d8e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-882n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:11:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ntkhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:52Z is after 2025-08-24T17:21:41Z" Jan 22 
09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.957050 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.957114 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.957135 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.957161 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.957180 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:52Z","lastTransitionTime":"2026-01-22T09:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.967101 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5nnld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7391f43-09a9-4333-8df2-72d4fdc02615\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99966\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99966\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:11:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5nnld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:52Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.983538 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846c35c97db11f9b67ef2a68b1f0fbd8c4c567f246b3cfb974feb11f6f205292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:52Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:52 crc kubenswrapper[4892]: I0122 09:11:52.999879 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:52Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.016905 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:53Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.034585 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hz9vn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80ef00cc-97bb-4f08-ba72-3947ab29043f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5dad65f61c4cb1bb2ceae159bb0447f72fadddb091f462882b14569cfc70bde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:11:42Z\\\",\\\"message\\\":\\\"2026-01-22T09:10:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4c2ebf13-8d4a-40e7-877a-ef97981620be\\\\n2026-01-22T09:10:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4c2ebf13-8d4a-40e7-877a-ef97981620be to /host/opt/cni/bin/\\\\n2026-01-22T09:10:57Z [verbose] multus-daemon started\\\\n2026-01-22T09:10:57Z [verbose] Readiness Indicator file check\\\\n2026-01-22T09:11:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9d9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hz9vn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:53Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.046726 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8b6t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78b0eb1d-db89-4f40-8f34-b35abed54117\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3669ac84679aab63695e16717d6ce853cd26e63d4d008685d746d61373a22450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dcc9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8b6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:53Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.060107 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.060147 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.060165 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.060188 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.060205 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:53Z","lastTransitionTime":"2026-01-22T09:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.062664 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac3cd91-e665-45d1-abbd-2d45b4392193\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name
\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:10:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 09:10:43.877163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:10:43.878339 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3286989590/tls.crt::/tmp/serving-cert-3286989590/tls.key\\\\\\\"\\\\nI0122 09:10:49.486587 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:10:49.526995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:10:49.527025 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:10:49.527052 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:10:49.527057 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:10:49.546438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:10:49.546471 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:10:49.546512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:10:49.546515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:10:49.546518 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:10:49.546522 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:10:49.546484 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:10:49.550299 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:53Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.079779 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"142d92a9-f2e0-45be-82f5-322439d4489c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5b378f795b29dfc84b76bc5d00e62a720152c11de7706951cf16f6f28d22695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49f8d7970508541aff408a864ee28b62dfcc364c71eb9cf6a7d8bed65a048f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c26cf9e131d882402ee4d883129a1aa107469310cd81b7b4132ff07af1c56d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c122b5a9b05843b32891382b08a26e863fd5611e29ca3bfcfbfe10be5311d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://025c122b5a9b05843b32891382b08a26e863fd5611e29ca3bfcfbfe10be5311d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:53Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.094950 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4765e554-3060-4876-90fe-5e054619d7a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d24de0c98259ab0c3ef7302d34c55c769640dfb351957d71e75699f7cf1033a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8xcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w87tf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:53Z is after 2025-08-24T17:21:41Z"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.109534 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gqbrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2782a4-367a-4690-911a-06ca51331fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79e069b9560e67a7f6e2188bfbfdd2180c36c0b0956f4a1d7c7f5803abcd3603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhzbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gqbrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:53Z is after 2025-08-24T17:21:41Z"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.128451 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73cbe765000c41861f745b197bf6b7388556b72645a26eda7d3a8194b4438014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6958c7ac98051f0dabd91509917bcdefc36c34de56c6c5a55fe540fcbeda600c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:53Z is after 2025-08-24T17:21:41Z"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.157766 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93623e9-3eab-47bb-b94a-5b962f3eb203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:11:51Z\\\",\\\"message\\\":\\\"-4315-bb03-c937a9c0d9b6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0122 09:11:51.249991 6984 factory.go:656] Stopping watch factory\\\\nI0122 09:11:51.250023 6984 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 09:11:51.250318 6984 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}\\\\nI0122 09:11:51.250349 6984 services_controller.go:360] Finished syncing service metrics on namespace openshift-etcd-operator for network=default : 1.323995ms\\\\nI0122 09:11:51.250381 6984 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/packageserver-service\\\\\\\"}\\\\nI0122 09:11:51.250391 6984 services_controller.go:360] Finished syncing service packageserver-service on namespace openshift-operator-lifecycle-manager for network=default : 1.381076ms\\\\nI0122 09:11:51.250572 6984 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0122 09:11:51.250769 6984 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0122 09:11:51.250816 6984 ovnkube.go:599] Stopped ovnkube\\\\nI0122 09:11:51.250844 6984 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 09:11:51.250932 6984 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-whb2h_openshift-ovn-kubernetes(a93623e9-3eab-47bb-b94a-5b962f3eb203)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvw6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-whb2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:53Z is after 2025-08-24T17:21:41Z"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.162502 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.162559 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.162578 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.162604 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.162623 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:53Z","lastTransitionTime":"2026-01-22T09:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.180564 4892 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe12181-a266-4b88-b591-e1c130d15254\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cf8a3ae4a4598966cb6d9be3d22dbad427f414e81a94e33d3283e4c6d662a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57bfaf1ba272c43337c29f354c7de579a980c8c488de247b8c2efdd84172dbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be22b6fe2c4f61c65550dd533c107905b236e7a64ad67c78baebd1c940740c73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66616c74022cdef1d8bf2ae3d67b6bc8b6ab4a0ba58884a05104a0b2ef30d603\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af65fb6e0ace80dc00076fbea4422fd3b431665ab7c049c2368ae859016aee50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b96884f18fd8e46f226908328e804c7ade5cc4fb831da1a4a5e42605895c47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11568e66c19abadccdffd5953a194de24fbe1b57052b70ec492042466f8c4af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlq68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:10:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7rbdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:53Z is after 2025-08-24T17:21:41Z"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.265234 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.265339 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.265361 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.265386 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.265442 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:53Z","lastTransitionTime":"2026-01-22T09:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.368184 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.368230 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.368242 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.368256 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.368266 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:53Z","lastTransitionTime":"2026-01-22T09:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.418409 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld"
Jan 22 09:11:53 crc kubenswrapper[4892]: E0122 09:11:53.418598 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.440815 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 20:05:14.061408206 +0000 UTC
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.471709 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.471820 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.471829 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.471859 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.471902 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:53Z","lastTransitionTime":"2026-01-22T09:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.574847 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.574895 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.574911 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.574933 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.574952 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:53Z","lastTransitionTime":"2026-01-22T09:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.677829 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.677895 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.677912 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.677937 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.677954 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:53Z","lastTransitionTime":"2026-01-22T09:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.780949 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.781040 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.781070 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.781102 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.781124 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:53Z","lastTransitionTime":"2026-01-22T09:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.884924 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.884996 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.885015 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.885046 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.885065 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:53Z","lastTransitionTime":"2026-01-22T09:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.987905 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.987968 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.987978 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.987995 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:53 crc kubenswrapper[4892]: I0122 09:11:53.988005 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:53Z","lastTransitionTime":"2026-01-22T09:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.090962 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.091005 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.091017 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.091033 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.091043 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:54Z","lastTransitionTime":"2026-01-22T09:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.193757 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.193829 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.193853 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.193929 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.193958 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:54Z","lastTransitionTime":"2026-01-22T09:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.296512 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.296582 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.296601 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.296626 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.296645 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:54Z","lastTransitionTime":"2026-01-22T09:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.301233 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 09:11:54 crc kubenswrapper[4892]: E0122 09:11:54.301448 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:12:58.301416745 +0000 UTC m=+148.145495838 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.301512 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.301562 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.301605 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 09:11:54 crc kubenswrapper[4892]: E0122 09:11:54.301754 4892 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 22 09:11:54 crc kubenswrapper[4892]: E0122 09:11:54.301788 4892 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 22 09:11:54 crc kubenswrapper[4892]: E0122 09:11:54.301808 4892 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 22 09:11:54 crc kubenswrapper[4892]: E0122 09:11:54.301831 4892 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 09:11:54 crc kubenswrapper[4892]: E0122 09:11:54.301875 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 09:12:58.301848996 +0000 UTC m=+148.145928089 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 22 09:11:54 crc kubenswrapper[4892]: E0122 09:11:54.301770 4892 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 22 09:11:54 crc kubenswrapper[4892]: E0122 09:11:54.301907 4892 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 22 09:11:54 crc kubenswrapper[4892]: E0122 09:11:54.301912 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 09:12:58.301888887 +0000 UTC m=+148.145967990 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 09:11:54 crc kubenswrapper[4892]: E0122 09:11:54.301923 4892 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 09:11:54 crc kubenswrapper[4892]: E0122 09:11:54.302052 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 09:12:58.302040061 +0000 UTC m=+148.146119194 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.399037 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.399075 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.399086 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.399100 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.399110 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:54Z","lastTransitionTime":"2026-01-22T09:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.402765 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 09:11:54 crc kubenswrapper[4892]: E0122 09:11:54.402933 4892 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 22 09:11:54 crc kubenswrapper[4892]: E0122 09:11:54.403016 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 09:12:58.40299389 +0000 UTC m=+148.247073053 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.417968 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 09:11:54 crc kubenswrapper[4892]: E0122 09:11:54.418156 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.418403 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.418468 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 09:11:54 crc kubenswrapper[4892]: E0122 09:11:54.418546 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 09:11:54 crc kubenswrapper[4892]: E0122 09:11:54.418670 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.441622 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 01:03:01.696216901 +0000 UTC
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.500962 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.500994 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.501004 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.501200 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.501211 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:54Z","lastTransitionTime":"2026-01-22T09:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.603886 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.603914 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.603922 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.603935 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.603943 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:54Z","lastTransitionTime":"2026-01-22T09:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.635080 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.635151 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.635177 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.635209 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.635236 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:54Z","lastTransitionTime":"2026-01-22T09:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:11:54 crc kubenswrapper[4892]: E0122 09:11:54.656810 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c930485-9734-4304-ad2c-ecfe6f90ae0f\\\",\\\"systemUUID\\\":\\\"61509b40-08df-4430-847e-d3a8d2681f9e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:54Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.662772 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.662834 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.662857 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.662884 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.662902 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:54Z","lastTransitionTime":"2026-01-22T09:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:54 crc kubenswrapper[4892]: E0122 09:11:54.685328 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c930485-9734-4304-ad2c-ecfe6f90ae0f\\\",\\\"systemUUID\\\":\\\"61509b40-08df-4430-847e-d3a8d2681f9e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:54Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.690104 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.690220 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.690243 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.690267 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.690326 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:54Z","lastTransitionTime":"2026-01-22T09:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:54 crc kubenswrapper[4892]: E0122 09:11:54.715815 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c930485-9734-4304-ad2c-ecfe6f90ae0f\\\",\\\"systemUUID\\\":\\\"61509b40-08df-4430-847e-d3a8d2681f9e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:54Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.721111 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.721178 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.721201 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.721228 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.721249 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:54Z","lastTransitionTime":"2026-01-22T09:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:54 crc kubenswrapper[4892]: E0122 09:11:54.739809 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c930485-9734-4304-ad2c-ecfe6f90ae0f\\\",\\\"systemUUID\\\":\\\"61509b40-08df-4430-847e-d3a8d2681f9e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:54Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.744366 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.744426 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.744443 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.744464 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.744483 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:54Z","lastTransitionTime":"2026-01-22T09:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:54 crc kubenswrapper[4892]: E0122 09:11:54.758170 4892 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:11:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c930485-9734-4304-ad2c-ecfe6f90ae0f\\\",\\\"systemUUID\\\":\\\"61509b40-08df-4430-847e-d3a8d2681f9e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:11:54Z is after 2025-08-24T17:21:41Z" Jan 22 09:11:54 crc kubenswrapper[4892]: E0122 09:11:54.758439 4892 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.760194 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.760242 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.760258 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.760278 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.760316 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:54Z","lastTransitionTime":"2026-01-22T09:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.862333 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.862377 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.862390 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.862405 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.862418 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:54Z","lastTransitionTime":"2026-01-22T09:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.964076 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.964108 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.964116 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.964127 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:54 crc kubenswrapper[4892]: I0122 09:11:54.964137 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:54Z","lastTransitionTime":"2026-01-22T09:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.066071 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.066112 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.066121 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.066134 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.066143 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:55Z","lastTransitionTime":"2026-01-22T09:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.167693 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.167723 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.167731 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.167743 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.167751 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:55Z","lastTransitionTime":"2026-01-22T09:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.270424 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.270479 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.270489 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.270501 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.270511 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:55Z","lastTransitionTime":"2026-01-22T09:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.372988 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.373037 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.373047 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.373064 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.373075 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:55Z","lastTransitionTime":"2026-01-22T09:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.418069 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:11:55 crc kubenswrapper[4892]: E0122 09:11:55.418557 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.442377 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 18:18:40.695509786 +0000 UTC Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.475983 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.476027 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.476040 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.476056 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.476068 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:55Z","lastTransitionTime":"2026-01-22T09:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.578192 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.578236 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.578246 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.578262 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.578274 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:55Z","lastTransitionTime":"2026-01-22T09:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.681385 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.681448 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.681466 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.681492 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.681510 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:55Z","lastTransitionTime":"2026-01-22T09:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.784087 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.784135 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.784145 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.784165 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.784176 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:55Z","lastTransitionTime":"2026-01-22T09:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.886557 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.886612 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.886626 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.886646 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.886660 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:55Z","lastTransitionTime":"2026-01-22T09:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.988628 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.988664 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.988675 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.988688 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:55 crc kubenswrapper[4892]: I0122 09:11:55.988697 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:55Z","lastTransitionTime":"2026-01-22T09:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.090940 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.090983 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.090992 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.091006 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.091016 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:56Z","lastTransitionTime":"2026-01-22T09:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.193658 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.193685 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.193692 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.193704 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.193713 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:56Z","lastTransitionTime":"2026-01-22T09:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.296586 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.296627 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.296635 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.296649 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.296661 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:56Z","lastTransitionTime":"2026-01-22T09:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.398940 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.399001 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.399021 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.399272 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.399318 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:56Z","lastTransitionTime":"2026-01-22T09:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.417598 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.417609 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.417757 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:11:56 crc kubenswrapper[4892]: E0122 09:11:56.417881 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:11:56 crc kubenswrapper[4892]: E0122 09:11:56.418056 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:11:56 crc kubenswrapper[4892]: E0122 09:11:56.418139 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.443008 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 14:28:22.501597025 +0000 UTC Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.501411 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.501482 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.501498 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.501520 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.501542 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:56Z","lastTransitionTime":"2026-01-22T09:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.604730 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.604781 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.604793 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.604811 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.604824 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:56Z","lastTransitionTime":"2026-01-22T09:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.708558 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.708621 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.708647 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.708675 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.708696 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:56Z","lastTransitionTime":"2026-01-22T09:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.810772 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.810854 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.810881 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.810897 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.810907 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:56Z","lastTransitionTime":"2026-01-22T09:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.914087 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.914132 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.914140 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.914154 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:56 crc kubenswrapper[4892]: I0122 09:11:56.914163 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:56Z","lastTransitionTime":"2026-01-22T09:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.016494 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.016532 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.016544 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.016562 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.016574 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:57Z","lastTransitionTime":"2026-01-22T09:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.118530 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.118569 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.118577 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.118591 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.118602 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:57Z","lastTransitionTime":"2026-01-22T09:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.221705 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.221778 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.221798 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.221823 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.221842 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:57Z","lastTransitionTime":"2026-01-22T09:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.324606 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.324643 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.324654 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.324670 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.324682 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:57Z","lastTransitionTime":"2026-01-22T09:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.418139 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:11:57 crc kubenswrapper[4892]: E0122 09:11:57.418330 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.426709 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.426761 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.426777 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.426802 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.426819 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:57Z","lastTransitionTime":"2026-01-22T09:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.444141 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 15:49:12.409686661 +0000 UTC Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.528859 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.528932 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.528955 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.528982 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.529003 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:57Z","lastTransitionTime":"2026-01-22T09:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.632333 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.632386 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.632401 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.632414 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.632424 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:57Z","lastTransitionTime":"2026-01-22T09:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.734996 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.735062 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.735072 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.735089 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.735100 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:57Z","lastTransitionTime":"2026-01-22T09:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.837581 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.837618 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.837627 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.837639 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.837648 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:57Z","lastTransitionTime":"2026-01-22T09:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.940066 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.940099 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.940109 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.940128 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:57 crc kubenswrapper[4892]: I0122 09:11:57.940138 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:57Z","lastTransitionTime":"2026-01-22T09:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.042462 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.042499 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.042510 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.042527 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.042538 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:58Z","lastTransitionTime":"2026-01-22T09:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.144857 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.144890 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.144898 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.144912 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.144922 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:58Z","lastTransitionTime":"2026-01-22T09:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.247157 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.247189 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.247197 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.247208 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.247219 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:58Z","lastTransitionTime":"2026-01-22T09:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.349168 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.349218 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.349235 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.349256 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.349273 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:58Z","lastTransitionTime":"2026-01-22T09:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.418484 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.418494 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:11:58 crc kubenswrapper[4892]: E0122 09:11:58.418661 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.418506 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:11:58 crc kubenswrapper[4892]: E0122 09:11:58.418803 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:11:58 crc kubenswrapper[4892]: E0122 09:11:58.418935 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.444253 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 17:31:45.41410217 +0000 UTC Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.455836 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.455870 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.455882 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.455898 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.455910 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:58Z","lastTransitionTime":"2026-01-22T09:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.558485 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.558535 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.558551 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.558570 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.558587 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:58Z","lastTransitionTime":"2026-01-22T09:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.661495 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.661540 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.661555 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.661573 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.661585 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:58Z","lastTransitionTime":"2026-01-22T09:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.763957 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.764001 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.764013 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.764028 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.764039 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:58Z","lastTransitionTime":"2026-01-22T09:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.865905 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.865938 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.865946 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.865962 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.865970 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:58Z","lastTransitionTime":"2026-01-22T09:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.968000 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.968042 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.968051 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.968068 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:58 crc kubenswrapper[4892]: I0122 09:11:58.968079 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:58Z","lastTransitionTime":"2026-01-22T09:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.071025 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.071268 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.071294 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.071311 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.071321 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:59Z","lastTransitionTime":"2026-01-22T09:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.172983 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.173022 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.173033 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.173052 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.173064 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:59Z","lastTransitionTime":"2026-01-22T09:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.275171 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.275208 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.275217 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.275230 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.275239 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:59Z","lastTransitionTime":"2026-01-22T09:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.378848 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.378891 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.378904 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.378925 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.378937 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:59Z","lastTransitionTime":"2026-01-22T09:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.418025 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:11:59 crc kubenswrapper[4892]: E0122 09:11:59.418172 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.445371 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 08:27:21.770326638 +0000 UTC Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.481903 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.481965 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.481975 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.481993 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.482011 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:59Z","lastTransitionTime":"2026-01-22T09:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.585027 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.585070 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.585079 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.585093 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.585104 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:59Z","lastTransitionTime":"2026-01-22T09:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.689090 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.689149 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.689159 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.689176 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.689186 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:59Z","lastTransitionTime":"2026-01-22T09:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.791762 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.791852 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.791876 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.791907 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.791930 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:59Z","lastTransitionTime":"2026-01-22T09:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.895476 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.895632 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.895647 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.895672 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.895692 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:59Z","lastTransitionTime":"2026-01-22T09:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.998729 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.998764 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.998778 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.998798 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:11:59 crc kubenswrapper[4892]: I0122 09:11:59.998811 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:11:59Z","lastTransitionTime":"2026-01-22T09:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.100917 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.100970 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.100979 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.101029 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.101039 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:12:00Z","lastTransitionTime":"2026-01-22T09:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.203893 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.203935 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.203947 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.204003 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.204013 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:12:00Z","lastTransitionTime":"2026-01-22T09:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.307163 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.307195 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.307203 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.307215 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.307225 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:12:00Z","lastTransitionTime":"2026-01-22T09:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.409952 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.410007 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.410019 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.410037 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.410049 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:12:00Z","lastTransitionTime":"2026-01-22T09:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.418426 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.418506 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:12:00 crc kubenswrapper[4892]: E0122 09:12:00.418639 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.418705 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:12:00 crc kubenswrapper[4892]: E0122 09:12:00.419048 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:12:00 crc kubenswrapper[4892]: E0122 09:12:00.419378 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.435263 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.445928 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 18:32:51.747863637 +0000 UTC Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.512473 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.512514 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.512526 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.512544 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.512555 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:12:00Z","lastTransitionTime":"2026-01-22T09:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.614847 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.614891 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.614902 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.614918 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.614930 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:12:00Z","lastTransitionTime":"2026-01-22T09:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.717459 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.717506 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.717520 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.717539 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.717553 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:12:00Z","lastTransitionTime":"2026-01-22T09:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.819997 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.820061 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.820074 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.820091 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.820102 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:12:00Z","lastTransitionTime":"2026-01-22T09:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.922794 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.922833 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.922867 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.922884 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:12:00 crc kubenswrapper[4892]: I0122 09:12:00.922897 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:12:00Z","lastTransitionTime":"2026-01-22T09:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.025599 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.025639 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.025651 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.025667 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.025679 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:12:01Z","lastTransitionTime":"2026-01-22T09:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.127894 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.127944 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.127958 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.127974 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.127984 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:12:01Z","lastTransitionTime":"2026-01-22T09:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.230693 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.230733 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.230743 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.230756 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.230767 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:12:01Z","lastTransitionTime":"2026-01-22T09:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.332708 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.332748 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.332758 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.332773 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.332784 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:12:01Z","lastTransitionTime":"2026-01-22T09:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.418646 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:12:01 crc kubenswrapper[4892]: E0122 09:12:01.418876 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.428091 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.434479 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.434503 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.434510 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.434522 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.434530 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:12:01Z","lastTransitionTime":"2026-01-22T09:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.446502 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 04:03:16.063362533 +0000 UTC Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.467683 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-hz9vn" podStartSLOduration=66.467663209 podStartE2EDuration="1m6.467663209s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:12:01.467300419 +0000 UTC m=+91.311379482" watchObservedRunningTime="2026-01-22 09:12:01.467663209 +0000 UTC m=+91.311742292" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.478226 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-m8b6t" podStartSLOduration=66.478211117 podStartE2EDuration="1m6.478211117s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:12:01.478179106 +0000 UTC m=+91.322258179" watchObservedRunningTime="2026-01-22 09:12:01.478211117 +0000 UTC m=+91.322290180" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.509796 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=35.509776058 podStartE2EDuration="35.509776058s" podCreationTimestamp="2026-01-22 09:11:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:12:01.509409748 +0000 UTC m=+91.353488831" watchObservedRunningTime="2026-01-22 09:12:01.509776058 +0000 UTC m=+91.353855121" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.536357 4892 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.536400 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.536411 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.536428 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.536439 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:12:01Z","lastTransitionTime":"2026-01-22T09:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.561547 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=1.561526311 podStartE2EDuration="1.561526311s" podCreationTimestamp="2026-01-22 09:12:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:12:01.535795623 +0000 UTC m=+91.379874706" watchObservedRunningTime="2026-01-22 09:12:01.561526311 +0000 UTC m=+91.405605384" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.575745 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podStartSLOduration=66.575728215 podStartE2EDuration="1m6.575728215s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:12:01.562049134 +0000 UTC m=+91.406128207" watchObservedRunningTime="2026-01-22 09:12:01.575728215 +0000 UTC m=+91.419807278" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.596660 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-gqbrf" podStartSLOduration=66.596641705 podStartE2EDuration="1m6.596641705s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:12:01.576530766 +0000 UTC m=+91.420609829" watchObservedRunningTime="2026-01-22 09:12:01.596641705 +0000 UTC m=+91.440720768" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.616926 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=72.616907249 podStartE2EDuration="1m12.616907249s" podCreationTimestamp="2026-01-22 09:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:12:01.597086197 +0000 UTC m=+91.441165260" watchObservedRunningTime="2026-01-22 09:12:01.616907249 +0000 UTC m=+91.460986312" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.632692 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-7rbdp" 
podStartSLOduration=66.632668774 podStartE2EDuration="1m6.632668774s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:12:01.632476639 +0000 UTC m=+91.476555702" watchObservedRunningTime="2026-01-22 09:12:01.632668774 +0000 UTC m=+91.476747837" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.638310 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.638359 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.638371 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.638389 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.638402 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:12:01Z","lastTransitionTime":"2026-01-22T09:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.697169 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ntkhf" podStartSLOduration=66.697150092 podStartE2EDuration="1m6.697150092s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:12:01.681927691 +0000 UTC m=+91.526006754" watchObservedRunningTime="2026-01-22 09:12:01.697150092 +0000 UTC m=+91.541229145" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.710671 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=72.710652758 podStartE2EDuration="1m12.710652758s" podCreationTimestamp="2026-01-22 09:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:12:01.710141264 +0000 UTC m=+91.554220337" watchObservedRunningTime="2026-01-22 09:12:01.710652758 +0000 UTC m=+91.554731821" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.741043 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.741088 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.741099 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.741119 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.741129 4892 setters.go:603] "Node became not ready" 
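node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:12:01Z","lastTransitionTime":"2026-01-22T09:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}

The "Observed pod startup duration" entries are internally consistent: in every entry here, podStartSLOduration and podStartE2EDuration equal watchObservedRunningTime minus podCreationTimestamp, and the zero-valued firstStartedPulling/lastFinishedPulling fields indicate no image pull was observed, presumably because the images were already present. Checking the arithmetic for the multus-additional-cni-plugins-7rbdp entry just above:

    from datetime import datetime

    # Values copied from the entry above. Go prints nanosecond precision;
    # Python's %f accepts at most microseconds, so the timestamp is truncated.
    created  = datetime.strptime("2026-01-22 09:10:55", "%Y-%m-%d %H:%M:%S")
    observed = datetime.strptime("2026-01-22 09:12:01.632668", "%Y-%m-%d %H:%M:%S.%f")
    print((observed - created).total_seconds())  # 66.632668, matching podStartSLOduration=66.632668774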
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:12:01Z","lastTransitionTime":"2026-01-22T09:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.843200 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.843234 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.843244 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.843257 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.843267 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:12:01Z","lastTransitionTime":"2026-01-22T09:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.945702 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.945752 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.945766 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.945783 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:12:01 crc kubenswrapper[4892]: I0122 09:12:01.945802 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:12:01Z","lastTransitionTime":"2026-01-22T09:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 22 09:12:02 crc kubenswrapper[4892]: I0122 09:12:02.048277 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:12:02 crc kubenswrapper[4892]: I0122 09:12:02.048333 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:12:02 crc kubenswrapper[4892]: I0122 09:12:02.048346 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:12:02 crc kubenswrapper[4892]: I0122 09:12:02.048361 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:12:02 crc kubenswrapper[4892]: I0122 09:12:02.048370 4892 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:12:02Z","lastTransitionTime":"2026-01-22T09:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[the five-entry node-status block above repeats verbatim roughly every 100 ms (09:12:02.151, 09:12:02.253, 09:12:02.355, ...) through 09:12:05.089, with only the timestamps changing; those repeats are elided here and below, while the unique entries interleaved with them are kept]
Jan 22 09:12:02 crc kubenswrapper[4892]: I0122 09:12:02.418159 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 09:12:02 crc kubenswrapper[4892]: I0122 09:12:02.418184 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 09:12:02 crc kubenswrapper[4892]: E0122 09:12:02.418361 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 09:12:02 crc kubenswrapper[4892]: I0122 09:12:02.418188 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 09:12:02 crc kubenswrapper[4892]: E0122 09:12:02.418442 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 09:12:02 crc kubenswrapper[4892]: E0122 09:12:02.418459 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 09:12:02 crc kubenswrapper[4892]: I0122 09:12:02.446668 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 14:25:36.392488405 +0000 UTC
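All of the NotReady churn above has one root cause: the container runtime reports NetworkReady=false because no CNI network configuration exists yet in /etc/kubernetes/cni/net.d/. A minimal Go sketch of that readiness test follows; it is a hand-rolled illustration, not kubelet's actual code, and the extension list reflects what libcni generally accepts.

```go
// cnicheck.go - minimal sketch of the check behind "no CNI configuration
// file in /etc/kubernetes/cni/net.d/": the plugin manager looks for at
// least one network config file in the conf dir; until one appears, the
// runtime keeps reporting NetworkReady=false.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func networkReady(confDir string) (bool, error) {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		// Extensions libcni generally recognizes as network configs.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := networkReady("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		fmt.Printf("NetworkReady=false (err=%v): has your network provider started?\n", err)
		return
	}
	fmt.Println("NetworkReady=true")
}
```

Once the network operator (here, OVN-Kubernetes) writes its conflist into that directory, this check flips and the Ready condition clears.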
Jan 22 09:12:03 crc kubenswrapper[4892]: I0122 09:12:03.418173 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld"
Jan 22 09:12:03 crc kubenswrapper[4892]: E0122 09:12:03.418394 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615"
Jan 22 09:12:03 crc kubenswrapper[4892]: I0122 09:12:03.447770 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 22:26:50.089162569 +0000 UTC
[the "No sandbox for pod can be found" / "Error syncing pod, skipping" group for the three openshift-network-diagnostics and openshift-network-console pods recurs every two seconds (09:12:04.418, 09:12:06.418, ...), alternating with the same pair for openshift-multus/network-metrics-daemon-5nnld on odd seconds (09:12:05.418, 09:12:07.418, ...); these repeats are elided below, except the group that closes this excerpt at 09:12:18]
Jan 22 09:12:04 crc kubenswrapper[4892]: I0122 09:12:04.447982 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 08:14:22.333087639 +0000 UTC
Jan 22 09:12:05 crc kubenswrapper[4892]: I0122 09:12:05.131912 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-vsnqs"]
Jan 22 09:12:05 crc kubenswrapper[4892]: I0122 09:12:05.132236 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vsnqs"
Jan 22 09:12:05 crc kubenswrapper[4892]: I0122 09:12:05.133935 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 22 09:12:05 crc kubenswrapper[4892]: I0122 09:12:05.134124 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 22 09:12:05 crc kubenswrapper[4892]: I0122 09:12:05.134254 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 22 09:12:05 crc kubenswrapper[4892]: I0122 09:12:05.134477 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 22 09:12:05 crc kubenswrapper[4892]: I0122 09:12:05.144712 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=4.144696168 podStartE2EDuration="4.144696168s" podCreationTimestamp="2026-01-22 09:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:12:05.144429901 +0000 UTC m=+94.988508974" watchObservedRunningTime="2026-01-22 09:12:05.144696168 +0000 UTC m=+94.988775231"
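The pod_startup_latency_tracker entry reports two durations for kube-rbac-proxy-crio-crc. A small sketch of the arithmetic as I read it, under the assumption that the SLO duration is end-to-end startup minus image-pull time; the pull timestamps here are zero values, so the two durations coincide.

```go
// startuplatency.go - sketch of the pod startup duration arithmetic
// (assumed semantics: E2E = observed running - pod creation;
// SLO duration = E2E minus time spent pulling images).
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, _ := time.Parse(layout, "2026-01-22 09:12:01 +0000 UTC")
	observed, _ := time.Parse(layout, "2026-01-22 09:12:05.144696168 +0000 UTC")

	// Zero-valued pull timestamps ("0001-01-01 ...") mean no image pull
	// contributed to startup, so pulling took 0s.
	var pulling time.Duration

	e2e := observed.Sub(created)
	slo := e2e - pulling
	fmt.Printf("podStartE2EDuration=%s podStartSLOduration=%s\n", e2e, slo)
	// podStartE2EDuration=4.144696168s podStartSLOduration=4.144696168s
}
```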
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vsnqs" Jan 22 09:12:05 crc kubenswrapper[4892]: I0122 09:12:05.222668 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6fcef6e6-4a1f-477d-ae34-70291b959ee4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vsnqs\" (UID: \"6fcef6e6-4a1f-477d-ae34-70291b959ee4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vsnqs" Jan 22 09:12:05 crc kubenswrapper[4892]: I0122 09:12:05.323989 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6fcef6e6-4a1f-477d-ae34-70291b959ee4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vsnqs\" (UID: \"6fcef6e6-4a1f-477d-ae34-70291b959ee4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vsnqs" Jan 22 09:12:05 crc kubenswrapper[4892]: I0122 09:12:05.324040 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6fcef6e6-4a1f-477d-ae34-70291b959ee4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vsnqs\" (UID: \"6fcef6e6-4a1f-477d-ae34-70291b959ee4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vsnqs" Jan 22 09:12:05 crc kubenswrapper[4892]: I0122 09:12:05.324062 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6fcef6e6-4a1f-477d-ae34-70291b959ee4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vsnqs\" (UID: \"6fcef6e6-4a1f-477d-ae34-70291b959ee4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vsnqs" Jan 22 09:12:05 crc kubenswrapper[4892]: I0122 09:12:05.324120 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fcef6e6-4a1f-477d-ae34-70291b959ee4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vsnqs\" (UID: \"6fcef6e6-4a1f-477d-ae34-70291b959ee4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vsnqs" Jan 22 09:12:05 crc kubenswrapper[4892]: I0122 09:12:05.324203 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6fcef6e6-4a1f-477d-ae34-70291b959ee4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vsnqs\" (UID: \"6fcef6e6-4a1f-477d-ae34-70291b959ee4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vsnqs" Jan 22 09:12:05 crc kubenswrapper[4892]: I0122 09:12:05.324138 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6fcef6e6-4a1f-477d-ae34-70291b959ee4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vsnqs\" (UID: \"6fcef6e6-4a1f-477d-ae34-70291b959ee4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vsnqs" Jan 22 09:12:05 crc kubenswrapper[4892]: I0122 09:12:05.324277 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6fcef6e6-4a1f-477d-ae34-70291b959ee4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vsnqs\" (UID: \"6fcef6e6-4a1f-477d-ae34-70291b959ee4\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vsnqs" Jan 22 09:12:05 crc kubenswrapper[4892]: I0122 09:12:05.324917 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6fcef6e6-4a1f-477d-ae34-70291b959ee4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vsnqs\" (UID: \"6fcef6e6-4a1f-477d-ae34-70291b959ee4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vsnqs" Jan 22 09:12:05 crc kubenswrapper[4892]: I0122 09:12:05.329763 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fcef6e6-4a1f-477d-ae34-70291b959ee4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vsnqs\" (UID: \"6fcef6e6-4a1f-477d-ae34-70291b959ee4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vsnqs" Jan 22 09:12:05 crc kubenswrapper[4892]: I0122 09:12:05.339221 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6fcef6e6-4a1f-477d-ae34-70291b959ee4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vsnqs\" (UID: \"6fcef6e6-4a1f-477d-ae34-70291b959ee4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vsnqs" Jan 22 09:12:05 crc kubenswrapper[4892]: I0122 09:12:05.418620 4892 scope.go:117] "RemoveContainer" containerID="0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c" Jan 22 09:12:05 crc kubenswrapper[4892]: I0122 09:12:05.418692 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:12:05 crc kubenswrapper[4892]: E0122 09:12:05.418772 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-whb2h_openshift-ovn-kubernetes(a93623e9-3eab-47bb-b94a-5b962f3eb203)\"" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" Jan 22 09:12:05 crc kubenswrapper[4892]: E0122 09:12:05.418925 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:12:05 crc kubenswrapper[4892]: I0122 09:12:05.446735 4892 util.go:30] "No sandbox for pod can be found. 
Jan 22 09:12:05 crc kubenswrapper[4892]: I0122 09:12:05.448415 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 05:02:54.020768047 +0000 UTC
Jan 22 09:12:05 crc kubenswrapper[4892]: I0122 09:12:05.448438 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Jan 22 09:12:05 crc kubenswrapper[4892]: I0122 09:12:05.457083 4892 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 22 09:12:05 crc kubenswrapper[4892]: I0122 09:12:05.926328 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vsnqs" event={"ID":"6fcef6e6-4a1f-477d-ae34-70291b959ee4","Type":"ContainerStarted","Data":"d6af0abc61f0ac28622b13508596c7df54c9196b4e48a181a3882f9d97d30eb5"}
Jan 22 09:12:05 crc kubenswrapper[4892]: I0122 09:12:05.926416 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vsnqs" event={"ID":"6fcef6e6-4a1f-477d-ae34-70291b959ee4","Type":"ContainerStarted","Data":"65586cde7201ea537f712ad0389a871d750be691f3256a3389abbb7c4860fe32"}
Jan 22 09:12:05 crc kubenswrapper[4892]: I0122 09:12:05.938762 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vsnqs" podStartSLOduration=70.938745498 podStartE2EDuration="1m10.938745498s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:12:05.938620645 +0000 UTC m=+95.782699708" watchObservedRunningTime="2026-01-22 09:12:05.938745498 +0000 UTC m=+95.782824561"
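Note the four certificate_manager entries: each sync recomputes a randomized rotation deadline, and every sampled deadline (2026-01-12, 2026-01-17, 2026-01-01, 2025-11-14) already lies in the past relative to 09:12, so the manager keeps attempting rotation until it succeeds here at 09:12:05 by filing a CSR. A sketch of the deadline draw, assuming client-go's documented "80% +/- 10% of validity" heuristic and, purely for illustration, a one-year validity window.

```go
// rotationdeadline.go - sketch of the jittered rotation deadline
// (assumption: the deadline is drawn uniformly from 70-90% of the
// notBefore..notAfter validity window; once time.Now() passes it,
// rotation starts and "Rotating certificates" is logged).
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // from the log
	notBefore := notAfter.Add(-365 * 24 * time.Hour)          // assumed validity
	for i := 0; i < 4; i++ {
		d := nextRotationDeadline(notBefore, notAfter)
		fmt.Printf("rotation deadline is %s (rotate now: %v)\n", d.UTC(), time.Now().After(d))
	}
}
```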
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:12:06 crc kubenswrapper[4892]: E0122 09:12:06.418884 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:12:07 crc kubenswrapper[4892]: I0122 09:12:07.418244 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:12:07 crc kubenswrapper[4892]: E0122 09:12:07.418408 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:12:08 crc kubenswrapper[4892]: I0122 09:12:08.417606 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:12:08 crc kubenswrapper[4892]: I0122 09:12:08.417686 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:12:08 crc kubenswrapper[4892]: I0122 09:12:08.417699 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:12:08 crc kubenswrapper[4892]: E0122 09:12:08.417771 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:12:08 crc kubenswrapper[4892]: E0122 09:12:08.417902 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:12:08 crc kubenswrapper[4892]: E0122 09:12:08.418056 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:12:09 crc kubenswrapper[4892]: I0122 09:12:09.418456 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:12:09 crc kubenswrapper[4892]: E0122 09:12:09.418642 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:12:10 crc kubenswrapper[4892]: I0122 09:12:10.417876 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:12:10 crc kubenswrapper[4892]: I0122 09:12:10.417978 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:12:10 crc kubenswrapper[4892]: I0122 09:12:10.418113 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:12:10 crc kubenswrapper[4892]: E0122 09:12:10.418228 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:12:10 crc kubenswrapper[4892]: E0122 09:12:10.418378 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:12:10 crc kubenswrapper[4892]: E0122 09:12:10.418455 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:12:11 crc kubenswrapper[4892]: I0122 09:12:11.418205 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:12:11 crc kubenswrapper[4892]: E0122 09:12:11.419551 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:12:12 crc kubenswrapper[4892]: I0122 09:12:12.417756 4892 util.go:30] "No sandbox for pod can be found. 
Jan 22 09:12:13 crc kubenswrapper[4892]: I0122 09:12:13.608411 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7391f43-09a9-4333-8df2-72d4fdc02615-metrics-certs\") pod \"network-metrics-daemon-5nnld\" (UID: \"f7391f43-09a9-4333-8df2-72d4fdc02615\") " pod="openshift-multus/network-metrics-daemon-5nnld"
Jan 22 09:12:13 crc kubenswrapper[4892]: E0122 09:12:13.608590 4892 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 22 09:12:13 crc kubenswrapper[4892]: E0122 09:12:13.608701 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7391f43-09a9-4333-8df2-72d4fdc02615-metrics-certs podName:f7391f43-09a9-4333-8df2-72d4fdc02615 nodeName:}" failed. No retries permitted until 2026-01-22 09:13:17.608674486 +0000 UTC m=+167.452753579 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f7391f43-09a9-4333-8df2-72d4fdc02615-metrics-certs") pod "network-metrics-daemon-5nnld" (UID: "f7391f43-09a9-4333-8df2-72d4fdc02615") : object "openshift-multus"/"metrics-daemon-secret" not registered
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:12:16 crc kubenswrapper[4892]: I0122 09:12:16.418267 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:12:16 crc kubenswrapper[4892]: E0122 09:12:16.418325 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:12:16 crc kubenswrapper[4892]: E0122 09:12:16.418756 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:12:16 crc kubenswrapper[4892]: E0122 09:12:16.418818 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:12:17 crc kubenswrapper[4892]: I0122 09:12:17.418401 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:12:17 crc kubenswrapper[4892]: E0122 09:12:17.418683 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:12:18 crc kubenswrapper[4892]: I0122 09:12:18.418154 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:12:18 crc kubenswrapper[4892]: I0122 09:12:18.418377 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:12:18 crc kubenswrapper[4892]: I0122 09:12:18.418487 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:12:18 crc kubenswrapper[4892]: E0122 09:12:18.418557 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:12:18 crc kubenswrapper[4892]: E0122 09:12:18.418602 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:12:18 crc kubenswrapper[4892]: E0122 09:12:18.418741 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:12:19 crc kubenswrapper[4892]: I0122 09:12:19.418038 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:12:19 crc kubenswrapper[4892]: E0122 09:12:19.418217 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:12:19 crc kubenswrapper[4892]: I0122 09:12:19.419013 4892 scope.go:117] "RemoveContainer" containerID="0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c" Jan 22 09:12:19 crc kubenswrapper[4892]: E0122 09:12:19.419256 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-whb2h_openshift-ovn-kubernetes(a93623e9-3eab-47bb-b94a-5b962f3eb203)\"" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" Jan 22 09:12:20 crc kubenswrapper[4892]: I0122 09:12:20.417771 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:12:20 crc kubenswrapper[4892]: I0122 09:12:20.417811 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:12:20 crc kubenswrapper[4892]: I0122 09:12:20.417850 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:12:20 crc kubenswrapper[4892]: E0122 09:12:20.417925 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:12:20 crc kubenswrapper[4892]: E0122 09:12:20.418014 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:12:20 crc kubenswrapper[4892]: E0122 09:12:20.418088 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:12:21 crc kubenswrapper[4892]: I0122 09:12:21.418469 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:12:21 crc kubenswrapper[4892]: E0122 09:12:21.419421 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:12:22 crc kubenswrapper[4892]: I0122 09:12:22.417826 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:12:22 crc kubenswrapper[4892]: I0122 09:12:22.417846 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:12:22 crc kubenswrapper[4892]: E0122 09:12:22.418345 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:12:22 crc kubenswrapper[4892]: E0122 09:12:22.418365 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:12:22 crc kubenswrapper[4892]: I0122 09:12:22.417906 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:12:22 crc kubenswrapper[4892]: E0122 09:12:22.418428 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:12:23 crc kubenswrapper[4892]: I0122 09:12:23.417725 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:12:23 crc kubenswrapper[4892]: E0122 09:12:23.417937 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:12:24 crc kubenswrapper[4892]: I0122 09:12:24.418531 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:12:24 crc kubenswrapper[4892]: I0122 09:12:24.418531 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:12:24 crc kubenswrapper[4892]: E0122 09:12:24.418751 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:12:24 crc kubenswrapper[4892]: I0122 09:12:24.418554 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:12:24 crc kubenswrapper[4892]: E0122 09:12:24.418858 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:12:24 crc kubenswrapper[4892]: E0122 09:12:24.419023 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:12:25 crc kubenswrapper[4892]: I0122 09:12:25.418685 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:12:25 crc kubenswrapper[4892]: E0122 09:12:25.418850 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:12:26 crc kubenswrapper[4892]: I0122 09:12:26.418334 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:12:26 crc kubenswrapper[4892]: I0122 09:12:26.418348 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:12:26 crc kubenswrapper[4892]: E0122 09:12:26.418690 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:12:26 crc kubenswrapper[4892]: E0122 09:12:26.418804 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:12:26 crc kubenswrapper[4892]: I0122 09:12:26.418830 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:12:26 crc kubenswrapper[4892]: E0122 09:12:26.419586 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:12:27 crc kubenswrapper[4892]: I0122 09:12:27.418614 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:12:27 crc kubenswrapper[4892]: E0122 09:12:27.418835 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:12:28 crc kubenswrapper[4892]: I0122 09:12:28.418230 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:12:28 crc kubenswrapper[4892]: I0122 09:12:28.418331 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:12:28 crc kubenswrapper[4892]: E0122 09:12:28.418382 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:12:28 crc kubenswrapper[4892]: I0122 09:12:28.418239 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:12:28 crc kubenswrapper[4892]: E0122 09:12:28.418493 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:12:28 crc kubenswrapper[4892]: E0122 09:12:28.418542 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:12:28 crc kubenswrapper[4892]: I0122 09:12:28.996130 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hz9vn_80ef00cc-97bb-4f08-ba72-3947ab29043f/kube-multus/1.log" Jan 22 09:12:28 crc kubenswrapper[4892]: I0122 09:12:28.996598 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hz9vn_80ef00cc-97bb-4f08-ba72-3947ab29043f/kube-multus/0.log" Jan 22 09:12:28 crc kubenswrapper[4892]: I0122 09:12:28.996631 4892 generic.go:334] "Generic (PLEG): container finished" podID="80ef00cc-97bb-4f08-ba72-3947ab29043f" containerID="d5dad65f61c4cb1bb2ceae159bb0447f72fadddb091f462882b14569cfc70bde" exitCode=1 Jan 22 09:12:28 crc kubenswrapper[4892]: I0122 09:12:28.996666 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hz9vn" event={"ID":"80ef00cc-97bb-4f08-ba72-3947ab29043f","Type":"ContainerDied","Data":"d5dad65f61c4cb1bb2ceae159bb0447f72fadddb091f462882b14569cfc70bde"} Jan 22 09:12:28 crc kubenswrapper[4892]: I0122 09:12:28.996713 4892 scope.go:117] "RemoveContainer" containerID="e57ecb5ef670301b26c5a3734615f733e0aeb797c31387b4d48c688bb1b3631b" Jan 22 09:12:28 crc kubenswrapper[4892]: I0122 09:12:28.997191 4892 scope.go:117] "RemoveContainer" containerID="d5dad65f61c4cb1bb2ceae159bb0447f72fadddb091f462882b14569cfc70bde" Jan 22 09:12:28 crc kubenswrapper[4892]: E0122 09:12:28.997391 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-hz9vn_openshift-multus(80ef00cc-97bb-4f08-ba72-3947ab29043f)\"" pod="openshift-multus/multus-hz9vn" podUID="80ef00cc-97bb-4f08-ba72-3947ab29043f" Jan 22 09:12:29 crc kubenswrapper[4892]: I0122 09:12:29.418118 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:12:29 crc kubenswrapper[4892]: E0122 09:12:29.418310 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:12:30 crc kubenswrapper[4892]: I0122 09:12:30.001362 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hz9vn_80ef00cc-97bb-4f08-ba72-3947ab29043f/kube-multus/1.log" Jan 22 09:12:30 crc kubenswrapper[4892]: I0122 09:12:30.418147 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:12:30 crc kubenswrapper[4892]: I0122 09:12:30.418228 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:12:30 crc kubenswrapper[4892]: I0122 09:12:30.418179 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:12:30 crc kubenswrapper[4892]: E0122 09:12:30.418343 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:12:30 crc kubenswrapper[4892]: E0122 09:12:30.418725 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:12:30 crc kubenswrapper[4892]: E0122 09:12:30.418615 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:12:31 crc kubenswrapper[4892]: E0122 09:12:31.395247 4892 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 22 09:12:31 crc kubenswrapper[4892]: I0122 09:12:31.417963 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:12:31 crc kubenswrapper[4892]: E0122 09:12:31.418902 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:12:31 crc kubenswrapper[4892]: E0122 09:12:31.500583 4892 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 09:12:32 crc kubenswrapper[4892]: I0122 09:12:32.418489 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:12:32 crc kubenswrapper[4892]: I0122 09:12:32.418496 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:12:32 crc kubenswrapper[4892]: I0122 09:12:32.418599 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:12:32 crc kubenswrapper[4892]: E0122 09:12:32.418720 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:12:32 crc kubenswrapper[4892]: E0122 09:12:32.418820 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:12:32 crc kubenswrapper[4892]: E0122 09:12:32.419122 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:12:32 crc kubenswrapper[4892]: I0122 09:12:32.419374 4892 scope.go:117] "RemoveContainer" containerID="0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c" Jan 22 09:12:33 crc kubenswrapper[4892]: I0122 09:12:33.012911 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-whb2h_a93623e9-3eab-47bb-b94a-5b962f3eb203/ovnkube-controller/3.log" Jan 22 09:12:33 crc kubenswrapper[4892]: I0122 09:12:33.015895 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" event={"ID":"a93623e9-3eab-47bb-b94a-5b962f3eb203","Type":"ContainerStarted","Data":"dc60364cbc1956f7f7aa8ad95b265cee21a47c96458b5a7f0f0ad31683c645f8"} Jan 22 09:12:33 crc kubenswrapper[4892]: I0122 09:12:33.016407 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:12:33 crc kubenswrapper[4892]: I0122 09:12:33.052909 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" podStartSLOduration=98.052891885 podStartE2EDuration="1m38.052891885s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:12:33.052057583 +0000 UTC m=+122.896136656" watchObservedRunningTime="2026-01-22 09:12:33.052891885 +0000 UTC m=+122.896970948" Jan 22 09:12:33 crc kubenswrapper[4892]: I0122 09:12:33.141326 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5nnld"] Jan 22 09:12:33 crc kubenswrapper[4892]: I0122 09:12:33.141486 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:12:33 crc kubenswrapper[4892]: E0122 09:12:33.141615 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:12:34 crc kubenswrapper[4892]: I0122 09:12:34.418195 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:12:34 crc kubenswrapper[4892]: I0122 09:12:34.418267 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:12:34 crc kubenswrapper[4892]: I0122 09:12:34.418225 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:12:34 crc kubenswrapper[4892]: E0122 09:12:34.418374 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:12:34 crc kubenswrapper[4892]: E0122 09:12:34.418532 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:12:34 crc kubenswrapper[4892]: E0122 09:12:34.418591 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:12:35 crc kubenswrapper[4892]: I0122 09:12:35.417666 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:12:35 crc kubenswrapper[4892]: E0122 09:12:35.417806 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:12:36 crc kubenswrapper[4892]: I0122 09:12:36.418334 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:12:36 crc kubenswrapper[4892]: I0122 09:12:36.418391 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:12:36 crc kubenswrapper[4892]: I0122 09:12:36.418349 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:12:36 crc kubenswrapper[4892]: E0122 09:12:36.418463 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:12:36 crc kubenswrapper[4892]: E0122 09:12:36.418520 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:12:36 crc kubenswrapper[4892]: E0122 09:12:36.418581 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:12:36 crc kubenswrapper[4892]: E0122 09:12:36.501649 4892 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 09:12:37 crc kubenswrapper[4892]: I0122 09:12:37.418277 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:12:37 crc kubenswrapper[4892]: E0122 09:12:37.418668 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:12:38 crc kubenswrapper[4892]: I0122 09:12:38.418658 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:12:38 crc kubenswrapper[4892]: I0122 09:12:38.418664 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:12:38 crc kubenswrapper[4892]: E0122 09:12:38.418895 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:12:38 crc kubenswrapper[4892]: E0122 09:12:38.419013 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:12:38 crc kubenswrapper[4892]: I0122 09:12:38.418696 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:12:38 crc kubenswrapper[4892]: E0122 09:12:38.419126 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:12:39 crc kubenswrapper[4892]: I0122 09:12:39.418278 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:12:39 crc kubenswrapper[4892]: E0122 09:12:39.418548 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:12:40 crc kubenswrapper[4892]: I0122 09:12:40.418486 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:12:40 crc kubenswrapper[4892]: I0122 09:12:40.418527 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:12:40 crc kubenswrapper[4892]: I0122 09:12:40.418601 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:12:40 crc kubenswrapper[4892]: E0122 09:12:40.418682 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:12:40 crc kubenswrapper[4892]: E0122 09:12:40.418846 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:12:40 crc kubenswrapper[4892]: E0122 09:12:40.418989 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:12:41 crc kubenswrapper[4892]: I0122 09:12:41.417975 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:12:41 crc kubenswrapper[4892]: E0122 09:12:41.418760 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:12:41 crc kubenswrapper[4892]: E0122 09:12:41.502314 4892 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 09:12:42 crc kubenswrapper[4892]: I0122 09:12:42.418225 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:12:42 crc kubenswrapper[4892]: I0122 09:12:42.418325 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:12:42 crc kubenswrapper[4892]: E0122 09:12:42.418490 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:12:42 crc kubenswrapper[4892]: I0122 09:12:42.418581 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:12:42 crc kubenswrapper[4892]: E0122 09:12:42.418718 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:12:42 crc kubenswrapper[4892]: I0122 09:12:42.418784 4892 scope.go:117] "RemoveContainer" containerID="d5dad65f61c4cb1bb2ceae159bb0447f72fadddb091f462882b14569cfc70bde" Jan 22 09:12:42 crc kubenswrapper[4892]: E0122 09:12:42.418911 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:12:43 crc kubenswrapper[4892]: I0122 09:12:43.047771 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hz9vn_80ef00cc-97bb-4f08-ba72-3947ab29043f/kube-multus/1.log" Jan 22 09:12:43 crc kubenswrapper[4892]: I0122 09:12:43.047841 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hz9vn" event={"ID":"80ef00cc-97bb-4f08-ba72-3947ab29043f","Type":"ContainerStarted","Data":"497bfee3be201ad7f5a2f636b9a63fec67e338fd03270d1e48260b051c0ddd34"} Jan 22 09:12:43 crc kubenswrapper[4892]: I0122 09:12:43.417830 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:12:43 crc kubenswrapper[4892]: E0122 09:12:43.417974 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:12:44 crc kubenswrapper[4892]: I0122 09:12:44.417609 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:12:44 crc kubenswrapper[4892]: I0122 09:12:44.417691 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:12:44 crc kubenswrapper[4892]: I0122 09:12:44.417618 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:12:44 crc kubenswrapper[4892]: E0122 09:12:44.417800 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:12:44 crc kubenswrapper[4892]: E0122 09:12:44.417893 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:12:44 crc kubenswrapper[4892]: E0122 09:12:44.418000 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:12:45 crc kubenswrapper[4892]: I0122 09:12:45.417668 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:12:45 crc kubenswrapper[4892]: E0122 09:12:45.417895 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5nnld" podUID="f7391f43-09a9-4333-8df2-72d4fdc02615" Jan 22 09:12:46 crc kubenswrapper[4892]: I0122 09:12:46.417782 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:12:46 crc kubenswrapper[4892]: I0122 09:12:46.417809 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:12:46 crc kubenswrapper[4892]: I0122 09:12:46.417917 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:12:46 crc kubenswrapper[4892]: E0122 09:12:46.418116 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:12:46 crc kubenswrapper[4892]: E0122 09:12:46.418304 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:12:46 crc kubenswrapper[4892]: E0122 09:12:46.418502 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:12:47 crc kubenswrapper[4892]: I0122 09:12:47.418521 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:12:47 crc kubenswrapper[4892]: I0122 09:12:47.421554 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 22 09:12:47 crc kubenswrapper[4892]: I0122 09:12:47.422961 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 22 09:12:48 crc kubenswrapper[4892]: I0122 09:12:48.418005 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:12:48 crc kubenswrapper[4892]: I0122 09:12:48.418078 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:12:48 crc kubenswrapper[4892]: I0122 09:12:48.418150 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:12:48 crc kubenswrapper[4892]: I0122 09:12:48.420405 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 22 09:12:48 crc kubenswrapper[4892]: I0122 09:12:48.420423 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 22 09:12:48 crc kubenswrapper[4892]: I0122 09:12:48.420455 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 22 09:12:48 crc kubenswrapper[4892]: I0122 09:12:48.420576 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.185100 4892 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.231334 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wtrx4"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.232097 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wtrx4" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.232433 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-mhgmk"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.232876 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-mhgmk" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.233311 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-bkqvb"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.233608 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bkqvb" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.234716 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5qlbn"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.235360 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5qlbn" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.235840 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fdkt8"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.236092 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fdkt8" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.236528 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.236956 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.237680 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vgpdt"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.237950 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vgpdt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.242876 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.243196 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.243343 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.243379 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.243605 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.243710 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.243795 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.243902 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.243352 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.244138 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.244165 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.244167 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 22 
09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.244206 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-j4vqv"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.244623 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.245451 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.245570 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.245686 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.245707 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.245762 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.247496 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-b7c5h"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.247730 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.247883 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.248010 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7c5h" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.248025 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.248124 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.248352 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-82b7j"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.248852 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-82b7j" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.249488 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4z8st"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.249786 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-4z8st" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.250174 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-s9gdc"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.250604 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-s9gdc" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.251393 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-56s7c"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.251665 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-56s7c" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.253451 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-95xtn"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.253714 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ctztv"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.254075 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ctztv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.254479 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-zgz9g"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.263390 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.269505 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-95xtn" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.277670 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.290180 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.291218 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-st5q7"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.291503 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-26bnl"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.291783 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-kw2w9"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.292036 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-26bnl" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.292224 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.293016 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-st5q7" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.293146 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.293271 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbzt8"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.293483 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.293639 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-86wr5"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.294069 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m5gnq"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.294174 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.294346 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hzsh9"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.294358 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.294436 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.294645 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.294671 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-f5sj6"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.294824 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.294876 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.294915 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.294919 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.295129 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbzt8" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.295153 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-86wr5" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.295174 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5sj6" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.295236 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m5gnq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.295329 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hzsh9" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.295419 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kw2w9" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.295804 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.298002 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.298340 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.298428 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.298531 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.298441 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.298916 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.299150 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.299249 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.299350 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.299418 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.299431 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.299488 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484540-bgd4w"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.299522 4892 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.299590 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.299665 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.299740 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.299813 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.299888 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.299934 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-bgd4w" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.299961 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.300029 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.300094 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.300161 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.300272 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.300301 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wfs96"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.300420 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.300531 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.300629 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.300714 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wfs96" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.300725 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.301444 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-698rc"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.301882 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-698rc" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.303595 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.303666 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.303796 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.303909 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.303953 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vw5wt"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.303989 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.303609 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.304057 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.304093 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.304177 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.304313 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.304338 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vw5wt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.304180 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.304435 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.305114 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.305908 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.306068 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.306184 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.306271 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.306461 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.306735 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.307138 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.307748 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pkwbh"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.308109 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-n2ds6"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.308590 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n2ds6" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.308779 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pkwbh" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.309094 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.309248 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.309372 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.311025 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pjgc7"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.311356 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.311451 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-96sxb"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.311792 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-96sxb" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.311808 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.311851 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.311954 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.311966 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cdmgc"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.311954 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pjgc7" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.313267 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.314000 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.314169 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.314266 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.314364 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.314446 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.314536 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.314630 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.329515 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.331113 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.331121 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-bknqb"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.331815 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-cdmgc" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.344335 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-bknqb" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.345578 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wf6dw"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.346030 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wf6dw" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.346912 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vbm7b"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.347428 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.347504 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.347743 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-smrbs"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.347943 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.348087 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-smrbs" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.348478 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.349572 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.353799 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fdkt8"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.356065 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-mhgmk"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.356692 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.357501 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5qlbn"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.357910 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.357980 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.358487 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-b7c5h"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.362748 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wtrx4"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.365621 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pprl2"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.366136 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vgpdt"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.366219 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pprl2" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.367645 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-j4vqv"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.369036 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-82b7j"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.370848 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-56s7c"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.373033 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-95xtn"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.376515 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.377100 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ctztv"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.378127 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-s9gdc"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.379209 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4z8st"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.380672 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.381564 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-kw2w9"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.382588 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-zgz9g"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.383743 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484540-bgd4w"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.384770 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-4knsq"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.385327 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-4knsq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.385733 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wfs96"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.386704 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m5gnq"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.387638 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-86wr5"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.388573 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hzsh9"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.389513 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-f5sj6"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.390479 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vw5wt"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.391605 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-698rc"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.395345 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2f01c3a-8bc3-460e-ba9e-3e21d9a15621-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-26bnl\" (UID: \"e2f01c3a-8bc3-460e-ba9e-3e21d9a15621\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-26bnl" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.395392 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a3ed54ef-a344-4af6-9e4c-abe9f8194edd-etcd-service-ca\") pod \"etcd-operator-b45778765-s9gdc\" (UID: \"a3ed54ef-a344-4af6-9e4c-abe9f8194edd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9gdc" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.395439 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6a6e5907-552b-4dc7-884f-d766a773e8b0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-lchcq\" (UID: \"6a6e5907-552b-4dc7-884f-d766a773e8b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.395477 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ed66930a-e393-47ea-a98d-907a1327edac-node-pullsecrets\") pod \"apiserver-76f77b778f-zgz9g\" (UID: \"ed66930a-e393-47ea-a98d-907a1327edac\") " pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.395706 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a357ae9-5621-4063-b475-508269240d98-audit-dir\") pod \"oauth-openshift-558db77b4-j4vqv\" 
(UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.395747 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a3ed54ef-a344-4af6-9e4c-abe9f8194edd-etcd-client\") pod \"etcd-operator-b45778765-s9gdc\" (UID: \"a3ed54ef-a344-4af6-9e4c-abe9f8194edd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9gdc" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.395776 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6j95\" (UniqueName: \"kubernetes.io/projected/5f7e5be0-b733-467d-afe0-35af7555688b-kube-api-access-j6j95\") pod \"openshift-apiserver-operator-796bbdcf4f-fdkt8\" (UID: \"5f7e5be0-b733-467d-afe0-35af7555688b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fdkt8" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.395815 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6a6e5907-552b-4dc7-884f-d766a773e8b0-encryption-config\") pod \"apiserver-7bbb656c7d-lchcq\" (UID: \"6a6e5907-552b-4dc7-884f-d766a773e8b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.395844 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2f01c3a-8bc3-460e-ba9e-3e21d9a15621-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-26bnl\" (UID: \"e2f01c3a-8bc3-460e-ba9e-3e21d9a15621\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-26bnl" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.395879 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.395899 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.395913 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3ed54ef-a344-4af6-9e4c-abe9f8194edd-serving-cert\") pod \"etcd-operator-b45778765-s9gdc\" (UID: \"a3ed54ef-a344-4af6-9e4c-abe9f8194edd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9gdc" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.396125 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa7d6587-5137-4b9b-accb-3b4800c1bce6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-56s7c\" (UID: \"fa7d6587-5137-4b9b-accb-3b4800c1bce6\") " pod="openshift-marketplace/marketplace-operator-79b997595-56s7c" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.396165 4892 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fm7c\" (UniqueName: \"kubernetes.io/projected/ab72073f-69cb-4719-b896-54618a6925db-kube-api-access-7fm7c\") pod \"console-f9d7485db-95xtn\" (UID: \"ab72073f-69cb-4719-b896-54618a6925db\") " pod="openshift-console/console-f9d7485db-95xtn" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.396189 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ed66930a-e393-47ea-a98d-907a1327edac-encryption-config\") pod \"apiserver-76f77b778f-zgz9g\" (UID: \"ed66930a-e393-47ea-a98d-907a1327edac\") " pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.396246 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-26bnl"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.396673 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.396784 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8d4a47e-b68c-428e-9e69-11b1040dd23e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-st5q7\" (UID: \"c8d4a47e-b68c-428e-9e69-11b1040dd23e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-st5q7" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.396815 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f7e5be0-b733-467d-afe0-35af7555688b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-fdkt8\" (UID: \"5f7e5be0-b733-467d-afe0-35af7555688b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fdkt8" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.396843 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3a357ae9-5621-4063-b475-508269240d98-audit-policies\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.396877 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npw47\" (UniqueName: \"kubernetes.io/projected/6a6e5907-552b-4dc7-884f-d766a773e8b0-kube-api-access-npw47\") pod \"apiserver-7bbb656c7d-lchcq\" (UID: \"6a6e5907-552b-4dc7-884f-d766a773e8b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.396951 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09f94488-4261-4a70-ab65-e85c42ba3313-config\") pod \"machine-api-operator-5694c8668f-mhgmk\" (UID: 
\"09f94488-4261-4a70-ab65-e85c42ba3313\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mhgmk" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.397000 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2895e12-d7e7-4eb4-8455-cae19e2347c2-serving-cert\") pod \"openshift-config-operator-7777fb866f-kw2w9\" (UID: \"f2895e12-d7e7-4eb4-8455-cae19e2347c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kw2w9" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.397115 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrptt\" (UniqueName: \"kubernetes.io/projected/5fee1fa7-f83e-4be4-88f0-ed57f5f1d051-kube-api-access-wrptt\") pod \"dns-operator-744455d44c-82b7j\" (UID: \"5fee1fa7-f83e-4be4-88f0-ed57f5f1d051\") " pod="openshift-dns-operator/dns-operator-744455d44c-82b7j" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.397224 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdkhz\" (UniqueName: \"kubernetes.io/projected/09f94488-4261-4a70-ab65-e85c42ba3313-kube-api-access-vdkhz\") pod \"machine-api-operator-5694c8668f-mhgmk\" (UID: \"09f94488-4261-4a70-ab65-e85c42ba3313\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mhgmk" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.397272 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.397391 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a3ed54ef-a344-4af6-9e4c-abe9f8194edd-etcd-ca\") pod \"etcd-operator-b45778765-s9gdc\" (UID: \"a3ed54ef-a344-4af6-9e4c-abe9f8194edd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9gdc" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.397430 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a6e5907-552b-4dc7-884f-d766a773e8b0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-lchcq\" (UID: \"6a6e5907-552b-4dc7-884f-d766a773e8b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.397469 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tthkj\" (UniqueName: \"kubernetes.io/projected/c2b2a373-92d3-4af2-94e3-e4611dbd7785-kube-api-access-tthkj\") pod \"machine-approver-56656f9798-bkqvb\" (UID: \"c2b2a373-92d3-4af2-94e3-e4611dbd7785\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bkqvb" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.397507 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ed66930a-e393-47ea-a98d-907a1327edac-etcd-client\") pod \"apiserver-76f77b778f-zgz9g\" (UID: \"ed66930a-e393-47ea-a98d-907a1327edac\") " 
pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.397540 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a91f44ce-a5d5-4379-a443-c61626f142f7-config\") pod \"route-controller-manager-6576b87f9c-vgpdt\" (UID: \"a91f44ce-a5d5-4379-a443-c61626f142f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vgpdt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.397579 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ab72073f-69cb-4719-b896-54618a6925db-service-ca\") pod \"console-f9d7485db-95xtn\" (UID: \"ab72073f-69cb-4719-b896-54618a6925db\") " pod="openshift-console/console-f9d7485db-95xtn" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.397647 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.397680 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8d4a47e-b68c-428e-9e69-11b1040dd23e-service-ca-bundle\") pod \"authentication-operator-69f744f599-st5q7\" (UID: \"c8d4a47e-b68c-428e-9e69-11b1040dd23e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-st5q7" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.397737 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.397813 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6a6e5907-552b-4dc7-884f-d766a773e8b0-audit-policies\") pod \"apiserver-7bbb656c7d-lchcq\" (UID: \"6a6e5907-552b-4dc7-884f-d766a773e8b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.397851 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.397948 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4855g\" (UniqueName: \"kubernetes.io/projected/beeba5d1-3ae7-4a2f-9e58-8b5baeea4ea6-kube-api-access-4855g\") pod \"cluster-samples-operator-665b6dd947-5qlbn\" (UID: \"beeba5d1-3ae7-4a2f-9e58-8b5baeea4ea6\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5qlbn" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.397992 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d20382c6-63e0-44c4-994b-952a489ece50-config\") pod \"console-operator-58897d9998-4z8st\" (UID: \"d20382c6-63e0-44c4-994b-952a489ece50\") " pod="openshift-console-operator/console-operator-58897d9998-4z8st" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.398025 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/beeba5d1-3ae7-4a2f-9e58-8b5baeea4ea6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-5qlbn\" (UID: \"beeba5d1-3ae7-4a2f-9e58-8b5baeea4ea6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5qlbn" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.398064 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab72073f-69cb-4719-b896-54618a6925db-trusted-ca-bundle\") pod \"console-f9d7485db-95xtn\" (UID: \"ab72073f-69cb-4719-b896-54618a6925db\") " pod="openshift-console/console-f9d7485db-95xtn" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.398125 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fa7d6587-5137-4b9b-accb-3b4800c1bce6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-56s7c\" (UID: \"fa7d6587-5137-4b9b-accb-3b4800c1bce6\") " pod="openshift-marketplace/marketplace-operator-79b997595-56s7c" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.398161 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9rwv\" (UniqueName: \"kubernetes.io/projected/97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97-kube-api-access-k9rwv\") pod \"controller-manager-879f6c89f-wtrx4\" (UID: \"97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wtrx4" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.398232 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab72073f-69cb-4719-b896-54618a6925db-console-serving-cert\") pod \"console-f9d7485db-95xtn\" (UID: \"ab72073f-69cb-4719-b896-54618a6925db\") " pod="openshift-console/console-f9d7485db-95xtn" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.398268 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed66930a-e393-47ea-a98d-907a1327edac-trusted-ca-bundle\") pod \"apiserver-76f77b778f-zgz9g\" (UID: \"ed66930a-e393-47ea-a98d-907a1327edac\") " pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.398311 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97-serving-cert\") pod \"controller-manager-879f6c89f-wtrx4\" (UID: \"97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-wtrx4" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.398352 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/fc5af5cc-4a80-4fe0-9c4a-498408cdc453-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ctztv\" (UID: \"fc5af5cc-4a80-4fe0-9c4a-498408cdc453\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ctztv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.398391 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn4qt\" (UniqueName: \"kubernetes.io/projected/ed66930a-e393-47ea-a98d-907a1327edac-kube-api-access-gn4qt\") pod \"apiserver-76f77b778f-zgz9g\" (UID: \"ed66930a-e393-47ea-a98d-907a1327edac\") " pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.398428 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t67kf\" (UniqueName: \"kubernetes.io/projected/3a357ae9-5621-4063-b475-508269240d98-kube-api-access-t67kf\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.398610 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ab72073f-69cb-4719-b896-54618a6925db-console-config\") pod \"console-f9d7485db-95xtn\" (UID: \"ab72073f-69cb-4719-b896-54618a6925db\") " pod="openshift-console/console-f9d7485db-95xtn" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.398664 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6a6e5907-552b-4dc7-884f-d766a773e8b0-etcd-client\") pod \"apiserver-7bbb656c7d-lchcq\" (UID: \"6a6e5907-552b-4dc7-884f-d766a773e8b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.398899 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ab72073f-69cb-4719-b896-54618a6925db-oauth-serving-cert\") pod \"console-f9d7485db-95xtn\" (UID: \"ab72073f-69cb-4719-b896-54618a6925db\") " pod="openshift-console/console-f9d7485db-95xtn" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.398935 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ab72073f-69cb-4719-b896-54618a6925db-console-oauth-config\") pod \"console-f9d7485db-95xtn\" (UID: \"ab72073f-69cb-4719-b896-54618a6925db\") " pod="openshift-console/console-f9d7485db-95xtn" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.398972 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f7e5be0-b733-467d-afe0-35af7555688b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-fdkt8\" (UID: \"5f7e5be0-b733-467d-afe0-35af7555688b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fdkt8" Jan 22 09:12:56 crc 
kubenswrapper[4892]: I0122 09:12:56.399000 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/09f94488-4261-4a70-ab65-e85c42ba3313-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-mhgmk\" (UID: \"09f94488-4261-4a70-ab65-e85c42ba3313\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mhgmk" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.399063 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ed66930a-e393-47ea-a98d-907a1327edac-etcd-serving-ca\") pod \"apiserver-76f77b778f-zgz9g\" (UID: \"ed66930a-e393-47ea-a98d-907a1327edac\") " pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.399102 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97-client-ca\") pod \"controller-manager-879f6c89f-wtrx4\" (UID: \"97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wtrx4" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.399132 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-st5q7"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.399141 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.399190 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3ed54ef-a344-4af6-9e4c-abe9f8194edd-config\") pod \"etcd-operator-b45778765-s9gdc\" (UID: \"a3ed54ef-a344-4af6-9e4c-abe9f8194edd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9gdc" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.399215 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg2t4\" (UniqueName: \"kubernetes.io/projected/a3ed54ef-a344-4af6-9e4c-abe9f8194edd-kube-api-access-lg2t4\") pod \"etcd-operator-b45778765-s9gdc\" (UID: \"a3ed54ef-a344-4af6-9e4c-abe9f8194edd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9gdc" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.399298 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c2b2a373-92d3-4af2-94e3-e4611dbd7785-machine-approver-tls\") pod \"machine-approver-56656f9798-bkqvb\" (UID: \"c2b2a373-92d3-4af2-94e3-e4611dbd7785\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bkqvb" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.399357 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d20382c6-63e0-44c4-994b-952a489ece50-trusted-ca\") pod \"console-operator-58897d9998-4z8st\" (UID: 
\"d20382c6-63e0-44c4-994b-952a489ece50\") " pod="openshift-console-operator/console-operator-58897d9998-4z8st" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.399396 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wtrx4\" (UID: \"97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wtrx4" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.399432 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a91f44ce-a5d5-4379-a443-c61626f142f7-serving-cert\") pod \"route-controller-manager-6576b87f9c-vgpdt\" (UID: \"a91f44ce-a5d5-4379-a443-c61626f142f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vgpdt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.399520 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/77aaec88-130d-43b9-9828-24098fc3748d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-b7c5h\" (UID: \"77aaec88-130d-43b9-9828-24098fc3748d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7c5h" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.399645 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8d4a47e-b68c-428e-9e69-11b1040dd23e-serving-cert\") pod \"authentication-operator-69f744f599-st5q7\" (UID: \"c8d4a47e-b68c-428e-9e69-11b1040dd23e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-st5q7" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.399764 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed66930a-e393-47ea-a98d-907a1327edac-serving-cert\") pod \"apiserver-76f77b778f-zgz9g\" (UID: \"ed66930a-e393-47ea-a98d-907a1327edac\") " pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.399850 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a6e5907-552b-4dc7-884f-d766a773e8b0-serving-cert\") pod \"apiserver-7bbb656c7d-lchcq\" (UID: \"6a6e5907-552b-4dc7-884f-d766a773e8b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.399939 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5fee1fa7-f83e-4be4-88f0-ed57f5f1d051-metrics-tls\") pod \"dns-operator-744455d44c-82b7j\" (UID: \"5fee1fa7-f83e-4be4-88f0-ed57f5f1d051\") " pod="openshift-dns-operator/dns-operator-744455d44c-82b7j" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.400042 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a91f44ce-a5d5-4379-a443-c61626f142f7-client-ca\") pod \"route-controller-manager-6576b87f9c-vgpdt\" (UID: \"a91f44ce-a5d5-4379-a443-c61626f142f7\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vgpdt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.400086 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/77aaec88-130d-43b9-9828-24098fc3748d-metrics-tls\") pod \"ingress-operator-5b745b69d9-b7c5h\" (UID: \"77aaec88-130d-43b9-9828-24098fc3748d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7c5h" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.400179 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d20382c6-63e0-44c4-994b-952a489ece50-serving-cert\") pod \"console-operator-58897d9998-4z8st\" (UID: \"d20382c6-63e0-44c4-994b-952a489ece50\") " pod="openshift-console-operator/console-operator-58897d9998-4z8st" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.400222 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h59xg\" (UniqueName: \"kubernetes.io/projected/fa7d6587-5137-4b9b-accb-3b4800c1bce6-kube-api-access-h59xg\") pod \"marketplace-operator-79b997595-56s7c\" (UID: \"fa7d6587-5137-4b9b-accb-3b4800c1bce6\") " pod="openshift-marketplace/marketplace-operator-79b997595-56s7c" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.400260 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skzhm\" (UniqueName: \"kubernetes.io/projected/fc5af5cc-4a80-4fe0-9c4a-498408cdc453-kube-api-access-skzhm\") pod \"package-server-manager-789f6589d5-ctztv\" (UID: \"fc5af5cc-4a80-4fe0-9c4a-498408cdc453\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ctztv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.400356 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.400380 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed66930a-e393-47ea-a98d-907a1327edac-config\") pod \"apiserver-76f77b778f-zgz9g\" (UID: \"ed66930a-e393-47ea-a98d-907a1327edac\") " pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.400405 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f2895e12-d7e7-4eb4-8455-cae19e2347c2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-kw2w9\" (UID: \"f2895e12-d7e7-4eb4-8455-cae19e2347c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kw2w9" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.400441 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c2b2a373-92d3-4af2-94e3-e4611dbd7785-auth-proxy-config\") pod \"machine-approver-56656f9798-bkqvb\" (UID: 
\"c2b2a373-92d3-4af2-94e3-e4611dbd7785\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bkqvb" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.400462 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6khz\" (UniqueName: \"kubernetes.io/projected/d20382c6-63e0-44c4-994b-952a489ece50-kube-api-access-f6khz\") pod \"console-operator-58897d9998-4z8st\" (UID: \"d20382c6-63e0-44c4-994b-952a489ece50\") " pod="openshift-console-operator/console-operator-58897d9998-4z8st" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.400489 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsd6n\" (UniqueName: \"kubernetes.io/projected/a91f44ce-a5d5-4379-a443-c61626f142f7-kube-api-access-rsd6n\") pod \"route-controller-manager-6576b87f9c-vgpdt\" (UID: \"a91f44ce-a5d5-4379-a443-c61626f142f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vgpdt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.400510 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.400533 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggsdz\" (UniqueName: \"kubernetes.io/projected/f2895e12-d7e7-4eb4-8455-cae19e2347c2-kube-api-access-ggsdz\") pod \"openshift-config-operator-7777fb866f-kw2w9\" (UID: \"f2895e12-d7e7-4eb4-8455-cae19e2347c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kw2w9" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.400554 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/77aaec88-130d-43b9-9828-24098fc3748d-trusted-ca\") pod \"ingress-operator-5b745b69d9-b7c5h\" (UID: \"77aaec88-130d-43b9-9828-24098fc3748d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7c5h" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.400594 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ed66930a-e393-47ea-a98d-907a1327edac-image-import-ca\") pod \"apiserver-76f77b778f-zgz9g\" (UID: \"ed66930a-e393-47ea-a98d-907a1327edac\") " pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.400611 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed66930a-e393-47ea-a98d-907a1327edac-audit-dir\") pod \"apiserver-76f77b778f-zgz9g\" (UID: \"ed66930a-e393-47ea-a98d-907a1327edac\") " pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.400639 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2b2a373-92d3-4af2-94e3-e4611dbd7785-config\") pod \"machine-approver-56656f9798-bkqvb\" (UID: 
\"c2b2a373-92d3-4af2-94e3-e4611dbd7785\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bkqvb" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.400662 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97-config\") pod \"controller-manager-879f6c89f-wtrx4\" (UID: \"97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wtrx4" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.400683 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.400699 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwj7j\" (UniqueName: \"kubernetes.io/projected/77aaec88-130d-43b9-9828-24098fc3748d-kube-api-access-vwj7j\") pod \"ingress-operator-5b745b69d9-b7c5h\" (UID: \"77aaec88-130d-43b9-9828-24098fc3748d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7c5h" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.400720 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6a6e5907-552b-4dc7-884f-d766a773e8b0-audit-dir\") pod \"apiserver-7bbb656c7d-lchcq\" (UID: \"6a6e5907-552b-4dc7-884f-d766a773e8b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.400744 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9r2s\" (UniqueName: \"kubernetes.io/projected/e2f01c3a-8bc3-460e-ba9e-3e21d9a15621-kube-api-access-v9r2s\") pod \"openshift-controller-manager-operator-756b6f6bc6-26bnl\" (UID: \"e2f01c3a-8bc3-460e-ba9e-3e21d9a15621\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-26bnl" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.400779 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/09f94488-4261-4a70-ab65-e85c42ba3313-images\") pod \"machine-api-operator-5694c8668f-mhgmk\" (UID: \"09f94488-4261-4a70-ab65-e85c42ba3313\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mhgmk" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.400853 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.400873 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8d4a47e-b68c-428e-9e69-11b1040dd23e-config\") pod \"authentication-operator-69f744f599-st5q7\" (UID: 
\"c8d4a47e-b68c-428e-9e69-11b1040dd23e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-st5q7" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.400914 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ed66930a-e393-47ea-a98d-907a1327edac-audit\") pod \"apiserver-76f77b778f-zgz9g\" (UID: \"ed66930a-e393-47ea-a98d-907a1327edac\") " pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.400947 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcc7m\" (UniqueName: \"kubernetes.io/projected/c8d4a47e-b68c-428e-9e69-11b1040dd23e-kube-api-access-gcc7m\") pod \"authentication-operator-69f744f599-st5q7\" (UID: \"c8d4a47e-b68c-428e-9e69-11b1040dd23e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-st5q7" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.402363 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbzt8"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.404899 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-96sxb"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.405438 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pkwbh"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.407557 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-n2ds6"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.408523 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vbm7b"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.409506 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pjgc7"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.410587 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-smrbs"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.411755 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pprl2"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.412885 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wf6dw"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.414057 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.414617 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cdmgc"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.416221 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-hk2bk"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.416957 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-hk2bk" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.417507 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rhbtn"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.419023 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-hk2bk"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.419786 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rhbtn"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.420418 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-rhbtn" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.433750 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.435860 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-x78kb"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.436547 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-x78kb" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.443918 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-x78kb"] Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.454224 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.474887 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.494358 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.501959 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npw47\" (UniqueName: \"kubernetes.io/projected/6a6e5907-552b-4dc7-884f-d766a773e8b0-kube-api-access-npw47\") pod \"apiserver-7bbb656c7d-lchcq\" (UID: \"6a6e5907-552b-4dc7-884f-d766a773e8b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502012 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09f94488-4261-4a70-ab65-e85c42ba3313-config\") pod \"machine-api-operator-5694c8668f-mhgmk\" (UID: \"09f94488-4261-4a70-ab65-e85c42ba3313\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mhgmk" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502034 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2895e12-d7e7-4eb4-8455-cae19e2347c2-serving-cert\") pod \"openshift-config-operator-7777fb866f-kw2w9\" (UID: \"f2895e12-d7e7-4eb4-8455-cae19e2347c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kw2w9" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502064 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrptt\" (UniqueName: \"kubernetes.io/projected/5fee1fa7-f83e-4be4-88f0-ed57f5f1d051-kube-api-access-wrptt\") pod \"dns-operator-744455d44c-82b7j\" (UID: 
\"5fee1fa7-f83e-4be4-88f0-ed57f5f1d051\") " pod="openshift-dns-operator/dns-operator-744455d44c-82b7j" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502079 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a6e5907-552b-4dc7-884f-d766a773e8b0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-lchcq\" (UID: \"6a6e5907-552b-4dc7-884f-d766a773e8b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502094 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdkhz\" (UniqueName: \"kubernetes.io/projected/09f94488-4261-4a70-ab65-e85c42ba3313-kube-api-access-vdkhz\") pod \"machine-api-operator-5694c8668f-mhgmk\" (UID: \"09f94488-4261-4a70-ab65-e85c42ba3313\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mhgmk" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502112 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502129 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a3ed54ef-a344-4af6-9e4c-abe9f8194edd-etcd-ca\") pod \"etcd-operator-b45778765-s9gdc\" (UID: \"a3ed54ef-a344-4af6-9e4c-abe9f8194edd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9gdc" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502145 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tthkj\" (UniqueName: \"kubernetes.io/projected/c2b2a373-92d3-4af2-94e3-e4611dbd7785-kube-api-access-tthkj\") pod \"machine-approver-56656f9798-bkqvb\" (UID: \"c2b2a373-92d3-4af2-94e3-e4611dbd7785\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bkqvb" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502158 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ed66930a-e393-47ea-a98d-907a1327edac-etcd-client\") pod \"apiserver-76f77b778f-zgz9g\" (UID: \"ed66930a-e393-47ea-a98d-907a1327edac\") " pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502177 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a91f44ce-a5d5-4379-a443-c61626f142f7-config\") pod \"route-controller-manager-6576b87f9c-vgpdt\" (UID: \"a91f44ce-a5d5-4379-a443-c61626f142f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vgpdt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502195 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ab72073f-69cb-4719-b896-54618a6925db-service-ca\") pod \"console-f9d7485db-95xtn\" (UID: \"ab72073f-69cb-4719-b896-54618a6925db\") " pod="openshift-console/console-f9d7485db-95xtn" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502213 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502228 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8d4a47e-b68c-428e-9e69-11b1040dd23e-service-ca-bundle\") pod \"authentication-operator-69f744f599-st5q7\" (UID: \"c8d4a47e-b68c-428e-9e69-11b1040dd23e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-st5q7" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502244 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502264 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea2bbc05-6497-4f5d-9681-a4f2d27e7aa7-config\") pod \"kube-controller-manager-operator-78b949d7b-m5gnq\" (UID: \"ea2bbc05-6497-4f5d-9681-a4f2d27e7aa7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m5gnq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502307 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6a6e5907-552b-4dc7-884f-d766a773e8b0-audit-policies\") pod \"apiserver-7bbb656c7d-lchcq\" (UID: \"6a6e5907-552b-4dc7-884f-d766a773e8b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502331 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502357 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4855g\" (UniqueName: \"kubernetes.io/projected/beeba5d1-3ae7-4a2f-9e58-8b5baeea4ea6-kube-api-access-4855g\") pod \"cluster-samples-operator-665b6dd947-5qlbn\" (UID: \"beeba5d1-3ae7-4a2f-9e58-8b5baeea4ea6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5qlbn" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502423 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d20382c6-63e0-44c4-994b-952a489ece50-config\") pod \"console-operator-58897d9998-4z8st\" (UID: \"d20382c6-63e0-44c4-994b-952a489ece50\") " pod="openshift-console-operator/console-operator-58897d9998-4z8st" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502447 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/beeba5d1-3ae7-4a2f-9e58-8b5baeea4ea6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-5qlbn\" (UID: \"beeba5d1-3ae7-4a2f-9e58-8b5baeea4ea6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5qlbn" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502463 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab72073f-69cb-4719-b896-54618a6925db-trusted-ca-bundle\") pod \"console-f9d7485db-95xtn\" (UID: \"ab72073f-69cb-4719-b896-54618a6925db\") " pod="openshift-console/console-f9d7485db-95xtn" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502481 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fa7d6587-5137-4b9b-accb-3b4800c1bce6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-56s7c\" (UID: \"fa7d6587-5137-4b9b-accb-3b4800c1bce6\") " pod="openshift-marketplace/marketplace-operator-79b997595-56s7c" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502496 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9rwv\" (UniqueName: \"kubernetes.io/projected/97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97-kube-api-access-k9rwv\") pod \"controller-manager-879f6c89f-wtrx4\" (UID: \"97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wtrx4" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502513 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab72073f-69cb-4719-b896-54618a6925db-console-serving-cert\") pod \"console-f9d7485db-95xtn\" (UID: \"ab72073f-69cb-4719-b896-54618a6925db\") " pod="openshift-console/console-f9d7485db-95xtn" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502528 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed66930a-e393-47ea-a98d-907a1327edac-trusted-ca-bundle\") pod \"apiserver-76f77b778f-zgz9g\" (UID: \"ed66930a-e393-47ea-a98d-907a1327edac\") " pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502544 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97-serving-cert\") pod \"controller-manager-879f6c89f-wtrx4\" (UID: \"97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wtrx4" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502562 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/fc5af5cc-4a80-4fe0-9c4a-498408cdc453-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ctztv\" (UID: \"fc5af5cc-4a80-4fe0-9c4a-498408cdc453\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ctztv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502579 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ab72073f-69cb-4719-b896-54618a6925db-console-config\") pod \"console-f9d7485db-95xtn\" (UID: 
\"ab72073f-69cb-4719-b896-54618a6925db\") " pod="openshift-console/console-f9d7485db-95xtn" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502598 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn4qt\" (UniqueName: \"kubernetes.io/projected/ed66930a-e393-47ea-a98d-907a1327edac-kube-api-access-gn4qt\") pod \"apiserver-76f77b778f-zgz9g\" (UID: \"ed66930a-e393-47ea-a98d-907a1327edac\") " pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502616 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t67kf\" (UniqueName: \"kubernetes.io/projected/3a357ae9-5621-4063-b475-508269240d98-kube-api-access-t67kf\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502633 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea2bbc05-6497-4f5d-9681-a4f2d27e7aa7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-m5gnq\" (UID: \"ea2bbc05-6497-4f5d-9681-a4f2d27e7aa7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m5gnq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502651 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w62jw\" (UniqueName: \"kubernetes.io/projected/0f5b8d1f-6184-4d2f-81f5-bca2d7b198a3-kube-api-access-w62jw\") pod \"catalog-operator-68c6474976-96sxb\" (UID: \"0f5b8d1f-6184-4d2f-81f5-bca2d7b198a3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-96sxb" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502668 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48ffec72-092a-4145-9136-d05df9fab68a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bbzt8\" (UID: \"48ffec72-092a-4145-9136-d05df9fab68a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbzt8" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502719 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/23125b22-0965-46a8-a698-dc256f032b3c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-86wr5\" (UID: \"23125b22-0965-46a8-a698-dc256f032b3c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-86wr5" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502736 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6a6e5907-552b-4dc7-884f-d766a773e8b0-etcd-client\") pod \"apiserver-7bbb656c7d-lchcq\" (UID: \"6a6e5907-552b-4dc7-884f-d766a773e8b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502753 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ab72073f-69cb-4719-b896-54618a6925db-console-oauth-config\") pod \"console-f9d7485db-95xtn\" (UID: \"ab72073f-69cb-4719-b896-54618a6925db\") " 
pod="openshift-console/console-f9d7485db-95xtn" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502787 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ab72073f-69cb-4719-b896-54618a6925db-oauth-serving-cert\") pod \"console-f9d7485db-95xtn\" (UID: \"ab72073f-69cb-4719-b896-54618a6925db\") " pod="openshift-console/console-f9d7485db-95xtn" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502805 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ed66930a-e393-47ea-a98d-907a1327edac-etcd-serving-ca\") pod \"apiserver-76f77b778f-zgz9g\" (UID: \"ed66930a-e393-47ea-a98d-907a1327edac\") " pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502810 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502821 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f7e5be0-b733-467d-afe0-35af7555688b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-fdkt8\" (UID: \"5f7e5be0-b733-467d-afe0-35af7555688b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fdkt8" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502871 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/09f94488-4261-4a70-ab65-e85c42ba3313-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-mhgmk\" (UID: \"09f94488-4261-4a70-ab65-e85c42ba3313\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mhgmk" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502898 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97-client-ca\") pod \"controller-manager-879f6c89f-wtrx4\" (UID: \"97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wtrx4" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502918 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502936 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3ed54ef-a344-4af6-9e4c-abe9f8194edd-config\") pod \"etcd-operator-b45778765-s9gdc\" (UID: \"a3ed54ef-a344-4af6-9e4c-abe9f8194edd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9gdc" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502936 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/09f94488-4261-4a70-ab65-e85c42ba3313-config\") pod \"machine-api-operator-5694c8668f-mhgmk\" (UID: \"09f94488-4261-4a70-ab65-e85c42ba3313\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mhgmk" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502953 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg2t4\" (UniqueName: \"kubernetes.io/projected/a3ed54ef-a344-4af6-9e4c-abe9f8194edd-kube-api-access-lg2t4\") pod \"etcd-operator-b45778765-s9gdc\" (UID: \"a3ed54ef-a344-4af6-9e4c-abe9f8194edd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9gdc" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502973 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c2b2a373-92d3-4af2-94e3-e4611dbd7785-machine-approver-tls\") pod \"machine-approver-56656f9798-bkqvb\" (UID: \"c2b2a373-92d3-4af2-94e3-e4611dbd7785\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bkqvb" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502992 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d20382c6-63e0-44c4-994b-952a489ece50-trusted-ca\") pod \"console-operator-58897d9998-4z8st\" (UID: \"d20382c6-63e0-44c4-994b-952a489ece50\") " pod="openshift-console-operator/console-operator-58897d9998-4z8st" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.502991 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a6e5907-552b-4dc7-884f-d766a773e8b0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-lchcq\" (UID: \"6a6e5907-552b-4dc7-884f-d766a773e8b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503014 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wtrx4\" (UID: \"97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wtrx4" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503056 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a91f44ce-a5d5-4379-a443-c61626f142f7-serving-cert\") pod \"route-controller-manager-6576b87f9c-vgpdt\" (UID: \"a91f44ce-a5d5-4379-a443-c61626f142f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vgpdt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503082 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/77aaec88-130d-43b9-9828-24098fc3748d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-b7c5h\" (UID: \"77aaec88-130d-43b9-9828-24098fc3748d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7c5h" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503103 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8d4a47e-b68c-428e-9e69-11b1040dd23e-serving-cert\") pod \"authentication-operator-69f744f599-st5q7\" (UID: \"c8d4a47e-b68c-428e-9e69-11b1040dd23e\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-st5q7" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503133 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/48ffec72-092a-4145-9136-d05df9fab68a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bbzt8\" (UID: \"48ffec72-092a-4145-9136-d05df9fab68a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbzt8" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503163 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed66930a-e393-47ea-a98d-907a1327edac-serving-cert\") pod \"apiserver-76f77b778f-zgz9g\" (UID: \"ed66930a-e393-47ea-a98d-907a1327edac\") " pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503182 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a6e5907-552b-4dc7-884f-d766a773e8b0-serving-cert\") pod \"apiserver-7bbb656c7d-lchcq\" (UID: \"6a6e5907-552b-4dc7-884f-d766a773e8b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503200 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5fee1fa7-f83e-4be4-88f0-ed57f5f1d051-metrics-tls\") pod \"dns-operator-744455d44c-82b7j\" (UID: \"5fee1fa7-f83e-4be4-88f0-ed57f5f1d051\") " pod="openshift-dns-operator/dns-operator-744455d44c-82b7j" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503218 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/93fe0c31-5f71-4e0f-8325-3b246885136e-node-bootstrap-token\") pod \"machine-config-server-4knsq\" (UID: \"93fe0c31-5f71-4e0f-8325-3b246885136e\") " pod="openshift-machine-config-operator/machine-config-server-4knsq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503257 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a91f44ce-a5d5-4379-a443-c61626f142f7-client-ca\") pod \"route-controller-manager-6576b87f9c-vgpdt\" (UID: \"a91f44ce-a5d5-4379-a443-c61626f142f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vgpdt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503275 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/77aaec88-130d-43b9-9828-24098fc3748d-metrics-tls\") pod \"ingress-operator-5b745b69d9-b7c5h\" (UID: \"77aaec88-130d-43b9-9828-24098fc3748d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7c5h" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503313 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d20382c6-63e0-44c4-994b-952a489ece50-serving-cert\") pod \"console-operator-58897d9998-4z8st\" (UID: \"d20382c6-63e0-44c4-994b-952a489ece50\") " pod="openshift-console-operator/console-operator-58897d9998-4z8st" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503331 4892 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-h59xg\" (UniqueName: \"kubernetes.io/projected/fa7d6587-5137-4b9b-accb-3b4800c1bce6-kube-api-access-h59xg\") pod \"marketplace-operator-79b997595-56s7c\" (UID: \"fa7d6587-5137-4b9b-accb-3b4800c1bce6\") " pod="openshift-marketplace/marketplace-operator-79b997595-56s7c" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503356 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skzhm\" (UniqueName: \"kubernetes.io/projected/fc5af5cc-4a80-4fe0-9c4a-498408cdc453-kube-api-access-skzhm\") pod \"package-server-manager-789f6589d5-ctztv\" (UID: \"fc5af5cc-4a80-4fe0-9c4a-498408cdc453\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ctztv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503378 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503406 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c2b2a373-92d3-4af2-94e3-e4611dbd7785-auth-proxy-config\") pod \"machine-approver-56656f9798-bkqvb\" (UID: \"c2b2a373-92d3-4af2-94e3-e4611dbd7785\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bkqvb" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503429 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6khz\" (UniqueName: \"kubernetes.io/projected/d20382c6-63e0-44c4-994b-952a489ece50-kube-api-access-f6khz\") pod \"console-operator-58897d9998-4z8st\" (UID: \"d20382c6-63e0-44c4-994b-952a489ece50\") " pod="openshift-console-operator/console-operator-58897d9998-4z8st" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503468 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed66930a-e393-47ea-a98d-907a1327edac-config\") pod \"apiserver-76f77b778f-zgz9g\" (UID: \"ed66930a-e393-47ea-a98d-907a1327edac\") " pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503488 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f2895e12-d7e7-4eb4-8455-cae19e2347c2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-kw2w9\" (UID: \"f2895e12-d7e7-4eb4-8455-cae19e2347c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kw2w9" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503506 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0f5b8d1f-6184-4d2f-81f5-bca2d7b198a3-srv-cert\") pod \"catalog-operator-68c6474976-96sxb\" (UID: \"0f5b8d1f-6184-4d2f-81f5-bca2d7b198a3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-96sxb" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503531 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/ed66930a-e393-47ea-a98d-907a1327edac-image-import-ca\") pod \"apiserver-76f77b778f-zgz9g\" (UID: \"ed66930a-e393-47ea-a98d-907a1327edac\") " pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503548 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsd6n\" (UniqueName: \"kubernetes.io/projected/a91f44ce-a5d5-4379-a443-c61626f142f7-kube-api-access-rsd6n\") pod \"route-controller-manager-6576b87f9c-vgpdt\" (UID: \"a91f44ce-a5d5-4379-a443-c61626f142f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vgpdt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503567 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503584 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggsdz\" (UniqueName: \"kubernetes.io/projected/f2895e12-d7e7-4eb4-8455-cae19e2347c2-kube-api-access-ggsdz\") pod \"openshift-config-operator-7777fb866f-kw2w9\" (UID: \"f2895e12-d7e7-4eb4-8455-cae19e2347c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kw2w9" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503601 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/77aaec88-130d-43b9-9828-24098fc3748d-trusted-ca\") pod \"ingress-operator-5b745b69d9-b7c5h\" (UID: \"77aaec88-130d-43b9-9828-24098fc3748d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7c5h" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503618 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/93fe0c31-5f71-4e0f-8325-3b246885136e-certs\") pod \"machine-config-server-4knsq\" (UID: \"93fe0c31-5f71-4e0f-8325-3b246885136e\") " pod="openshift-machine-config-operator/machine-config-server-4knsq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503622 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97-client-ca\") pod \"controller-manager-879f6c89f-wtrx4\" (UID: \"97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wtrx4" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503635 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4vzc\" (UniqueName: \"kubernetes.io/projected/23125b22-0965-46a8-a698-dc256f032b3c-kube-api-access-v4vzc\") pod \"multus-admission-controller-857f4d67dd-86wr5\" (UID: \"23125b22-0965-46a8-a698-dc256f032b3c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-86wr5" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503663 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed66930a-e393-47ea-a98d-907a1327edac-audit-dir\") pod \"apiserver-76f77b778f-zgz9g\" (UID: 
\"ed66930a-e393-47ea-a98d-907a1327edac\") " pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503688 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea2bbc05-6497-4f5d-9681-a4f2d27e7aa7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-m5gnq\" (UID: \"ea2bbc05-6497-4f5d-9681-a4f2d27e7aa7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m5gnq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503712 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0f5b8d1f-6184-4d2f-81f5-bca2d7b198a3-profile-collector-cert\") pod \"catalog-operator-68c6474976-96sxb\" (UID: \"0f5b8d1f-6184-4d2f-81f5-bca2d7b198a3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-96sxb" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503739 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2b2a373-92d3-4af2-94e3-e4611dbd7785-config\") pod \"machine-approver-56656f9798-bkqvb\" (UID: \"c2b2a373-92d3-4af2-94e3-e4611dbd7785\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bkqvb" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503757 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97-config\") pod \"controller-manager-879f6c89f-wtrx4\" (UID: \"97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wtrx4" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503773 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503789 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwj7j\" (UniqueName: \"kubernetes.io/projected/77aaec88-130d-43b9-9828-24098fc3748d-kube-api-access-vwj7j\") pod \"ingress-operator-5b745b69d9-b7c5h\" (UID: \"77aaec88-130d-43b9-9828-24098fc3748d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7c5h" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503807 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48ffec72-092a-4145-9136-d05df9fab68a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bbzt8\" (UID: \"48ffec72-092a-4145-9136-d05df9fab68a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbzt8" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503825 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ed66930a-e393-47ea-a98d-907a1327edac-audit\") pod \"apiserver-76f77b778f-zgz9g\" (UID: \"ed66930a-e393-47ea-a98d-907a1327edac\") " 
pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503841 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6a6e5907-552b-4dc7-884f-d766a773e8b0-audit-dir\") pod \"apiserver-7bbb656c7d-lchcq\" (UID: \"6a6e5907-552b-4dc7-884f-d766a773e8b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503859 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9r2s\" (UniqueName: \"kubernetes.io/projected/e2f01c3a-8bc3-460e-ba9e-3e21d9a15621-kube-api-access-v9r2s\") pod \"openshift-controller-manager-operator-756b6f6bc6-26bnl\" (UID: \"e2f01c3a-8bc3-460e-ba9e-3e21d9a15621\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-26bnl" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503879 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/09f94488-4261-4a70-ab65-e85c42ba3313-images\") pod \"machine-api-operator-5694c8668f-mhgmk\" (UID: \"09f94488-4261-4a70-ab65-e85c42ba3313\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mhgmk" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503896 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503912 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8d4a47e-b68c-428e-9e69-11b1040dd23e-config\") pod \"authentication-operator-69f744f599-st5q7\" (UID: \"c8d4a47e-b68c-428e-9e69-11b1040dd23e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-st5q7" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503927 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcc7m\" (UniqueName: \"kubernetes.io/projected/c8d4a47e-b68c-428e-9e69-11b1040dd23e-kube-api-access-gcc7m\") pod \"authentication-operator-69f744f599-st5q7\" (UID: \"c8d4a47e-b68c-428e-9e69-11b1040dd23e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-st5q7" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503945 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6a6e5907-552b-4dc7-884f-d766a773e8b0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-lchcq\" (UID: \"6a6e5907-552b-4dc7-884f-d766a773e8b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503959 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2f01c3a-8bc3-460e-ba9e-3e21d9a15621-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-26bnl\" (UID: \"e2f01c3a-8bc3-460e-ba9e-3e21d9a15621\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-26bnl" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503975 
4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a3ed54ef-a344-4af6-9e4c-abe9f8194edd-etcd-service-ca\") pod \"etcd-operator-b45778765-s9gdc\" (UID: \"a3ed54ef-a344-4af6-9e4c-abe9f8194edd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9gdc" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503998 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ed66930a-e393-47ea-a98d-907a1327edac-node-pullsecrets\") pod \"apiserver-76f77b778f-zgz9g\" (UID: \"ed66930a-e393-47ea-a98d-907a1327edac\") " pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.504015 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a357ae9-5621-4063-b475-508269240d98-audit-dir\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.504033 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6j95\" (UniqueName: \"kubernetes.io/projected/5f7e5be0-b733-467d-afe0-35af7555688b-kube-api-access-j6j95\") pod \"openshift-apiserver-operator-796bbdcf4f-fdkt8\" (UID: \"5f7e5be0-b733-467d-afe0-35af7555688b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fdkt8" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.504049 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6a6e5907-552b-4dc7-884f-d766a773e8b0-encryption-config\") pod \"apiserver-7bbb656c7d-lchcq\" (UID: \"6a6e5907-552b-4dc7-884f-d766a773e8b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.504064 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a3ed54ef-a344-4af6-9e4c-abe9f8194edd-etcd-client\") pod \"etcd-operator-b45778765-s9gdc\" (UID: \"a3ed54ef-a344-4af6-9e4c-abe9f8194edd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9gdc" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.504082 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa7d6587-5137-4b9b-accb-3b4800c1bce6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-56s7c\" (UID: \"fa7d6587-5137-4b9b-accb-3b4800c1bce6\") " pod="openshift-marketplace/marketplace-operator-79b997595-56s7c" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.504098 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2f01c3a-8bc3-460e-ba9e-3e21d9a15621-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-26bnl\" (UID: \"e2f01c3a-8bc3-460e-ba9e-3e21d9a15621\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-26bnl" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.504114 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.504147 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3ed54ef-a344-4af6-9e4c-abe9f8194edd-serving-cert\") pod \"etcd-operator-b45778765-s9gdc\" (UID: \"a3ed54ef-a344-4af6-9e4c-abe9f8194edd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9gdc" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.504170 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fm7c\" (UniqueName: \"kubernetes.io/projected/ab72073f-69cb-4719-b896-54618a6925db-kube-api-access-7fm7c\") pod \"console-f9d7485db-95xtn\" (UID: \"ab72073f-69cb-4719-b896-54618a6925db\") " pod="openshift-console/console-f9d7485db-95xtn" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.504186 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ed66930a-e393-47ea-a98d-907a1327edac-encryption-config\") pod \"apiserver-76f77b778f-zgz9g\" (UID: \"ed66930a-e393-47ea-a98d-907a1327edac\") " pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.504202 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.504217 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8d4a47e-b68c-428e-9e69-11b1040dd23e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-st5q7\" (UID: \"c8d4a47e-b68c-428e-9e69-11b1040dd23e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-st5q7" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.504236 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f7e5be0-b733-467d-afe0-35af7555688b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-fdkt8\" (UID: \"5f7e5be0-b733-467d-afe0-35af7555688b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fdkt8" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.504251 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3a357ae9-5621-4063-b475-508269240d98-audit-policies\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.504270 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c8n9\" (UniqueName: \"kubernetes.io/projected/93fe0c31-5f71-4e0f-8325-3b246885136e-kube-api-access-6c8n9\") pod \"machine-config-server-4knsq\" (UID: 
\"93fe0c31-5f71-4e0f-8325-3b246885136e\") " pod="openshift-machine-config-operator/machine-config-server-4knsq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.507930 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed66930a-e393-47ea-a98d-907a1327edac-audit-dir\") pod \"apiserver-76f77b778f-zgz9g\" (UID: \"ed66930a-e393-47ea-a98d-907a1327edac\") " pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503444 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f7e5be0-b733-467d-afe0-35af7555688b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-fdkt8\" (UID: \"5f7e5be0-b733-467d-afe0-35af7555688b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fdkt8" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.508231 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a357ae9-5621-4063-b475-508269240d98-audit-dir\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.503971 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wtrx4\" (UID: \"97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wtrx4" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.504250 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a3ed54ef-a344-4af6-9e4c-abe9f8194edd-etcd-ca\") pod \"etcd-operator-b45778765-s9gdc\" (UID: \"a3ed54ef-a344-4af6-9e4c-abe9f8194edd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9gdc" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.508627 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6a6e5907-552b-4dc7-884f-d766a773e8b0-audit-dir\") pod \"apiserver-7bbb656c7d-lchcq\" (UID: \"6a6e5907-552b-4dc7-884f-d766a773e8b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.509022 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ed66930a-e393-47ea-a98d-907a1327edac-node-pullsecrets\") pod \"apiserver-76f77b778f-zgz9g\" (UID: \"ed66930a-e393-47ea-a98d-907a1327edac\") " pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.509383 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/09f94488-4261-4a70-ab65-e85c42ba3313-images\") pod \"machine-api-operator-5694c8668f-mhgmk\" (UID: \"09f94488-4261-4a70-ab65-e85c42ba3313\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mhgmk" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.509552 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a3ed54ef-a344-4af6-9e4c-abe9f8194edd-etcd-service-ca\") pod 
\"etcd-operator-b45778765-s9gdc\" (UID: \"a3ed54ef-a344-4af6-9e4c-abe9f8194edd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9gdc" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.509578 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.510252 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ed66930a-e393-47ea-a98d-907a1327edac-etcd-client\") pod \"apiserver-76f77b778f-zgz9g\" (UID: \"ed66930a-e393-47ea-a98d-907a1327edac\") " pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.510480 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a91f44ce-a5d5-4379-a443-c61626f142f7-serving-cert\") pod \"route-controller-manager-6576b87f9c-vgpdt\" (UID: \"a91f44ce-a5d5-4379-a443-c61626f142f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vgpdt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.510529 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/09f94488-4261-4a70-ab65-e85c42ba3313-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-mhgmk\" (UID: \"09f94488-4261-4a70-ab65-e85c42ba3313\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mhgmk" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.511134 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a91f44ce-a5d5-4379-a443-c61626f142f7-client-ca\") pod \"route-controller-manager-6576b87f9c-vgpdt\" (UID: \"a91f44ce-a5d5-4379-a443-c61626f142f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vgpdt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.511542 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97-config\") pod \"controller-manager-879f6c89f-wtrx4\" (UID: \"97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wtrx4" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.511755 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab72073f-69cb-4719-b896-54618a6925db-console-serving-cert\") pod \"console-f9d7485db-95xtn\" (UID: \"ab72073f-69cb-4719-b896-54618a6925db\") " pod="openshift-console/console-f9d7485db-95xtn" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.511775 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6a6e5907-552b-4dc7-884f-d766a773e8b0-audit-policies\") pod \"apiserver-7bbb656c7d-lchcq\" (UID: \"6a6e5907-552b-4dc7-884f-d766a773e8b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.511986 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/a3ed54ef-a344-4af6-9e4c-abe9f8194edd-config\") pod \"etcd-operator-b45778765-s9gdc\" (UID: \"a3ed54ef-a344-4af6-9e4c-abe9f8194edd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9gdc" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.512824 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a91f44ce-a5d5-4379-a443-c61626f142f7-config\") pod \"route-controller-manager-6576b87f9c-vgpdt\" (UID: \"a91f44ce-a5d5-4379-a443-c61626f142f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vgpdt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.512957 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed66930a-e393-47ea-a98d-907a1327edac-config\") pod \"apiserver-76f77b778f-zgz9g\" (UID: \"ed66930a-e393-47ea-a98d-907a1327edac\") " pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.512959 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ed66930a-e393-47ea-a98d-907a1327edac-etcd-serving-ca\") pod \"apiserver-76f77b778f-zgz9g\" (UID: \"ed66930a-e393-47ea-a98d-907a1327edac\") " pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.513538 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/77aaec88-130d-43b9-9828-24098fc3748d-trusted-ca\") pod \"ingress-operator-5b745b69d9-b7c5h\" (UID: \"77aaec88-130d-43b9-9828-24098fc3748d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7c5h" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.513565 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8d4a47e-b68c-428e-9e69-11b1040dd23e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-st5q7\" (UID: \"c8d4a47e-b68c-428e-9e69-11b1040dd23e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-st5q7" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.513838 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab72073f-69cb-4719-b896-54618a6925db-trusted-ca-bundle\") pod \"console-f9d7485db-95xtn\" (UID: \"ab72073f-69cb-4719-b896-54618a6925db\") " pod="openshift-console/console-f9d7485db-95xtn" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.514040 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6a6e5907-552b-4dc7-884f-d766a773e8b0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-lchcq\" (UID: \"6a6e5907-552b-4dc7-884f-d766a773e8b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.514269 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d20382c6-63e0-44c4-994b-952a489ece50-config\") pod \"console-operator-58897d9998-4z8st\" (UID: \"d20382c6-63e0-44c4-994b-952a489ece50\") " pod="openshift-console-operator/console-operator-58897d9998-4z8st" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.514426 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.514583 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.514618 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c2b2a373-92d3-4af2-94e3-e4611dbd7785-auth-proxy-config\") pod \"machine-approver-56656f9798-bkqvb\" (UID: \"c2b2a373-92d3-4af2-94e3-e4611dbd7785\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bkqvb" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.515049 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8d4a47e-b68c-428e-9e69-11b1040dd23e-config\") pod \"authentication-operator-69f744f599-st5q7\" (UID: \"c8d4a47e-b68c-428e-9e69-11b1040dd23e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-st5q7" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.515046 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f2895e12-d7e7-4eb4-8455-cae19e2347c2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-kw2w9\" (UID: \"f2895e12-d7e7-4eb4-8455-cae19e2347c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kw2w9" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.515400 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8d4a47e-b68c-428e-9e69-11b1040dd23e-serving-cert\") pod \"authentication-operator-69f744f599-st5q7\" (UID: \"c8d4a47e-b68c-428e-9e69-11b1040dd23e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-st5q7" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.515451 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d20382c6-63e0-44c4-994b-952a489ece50-trusted-ca\") pod \"console-operator-58897d9998-4z8st\" (UID: \"d20382c6-63e0-44c4-994b-952a489ece50\") " pod="openshift-console-operator/console-operator-58897d9998-4z8st" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.515595 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.515770 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ed66930a-e393-47ea-a98d-907a1327edac-audit\") pod \"apiserver-76f77b778f-zgz9g\" (UID: 
\"ed66930a-e393-47ea-a98d-907a1327edac\") " pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.515872 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa7d6587-5137-4b9b-accb-3b4800c1bce6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-56s7c\" (UID: \"fa7d6587-5137-4b9b-accb-3b4800c1bce6\") " pod="openshift-marketplace/marketplace-operator-79b997595-56s7c" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.515913 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2b2a373-92d3-4af2-94e3-e4611dbd7785-config\") pod \"machine-approver-56656f9798-bkqvb\" (UID: \"c2b2a373-92d3-4af2-94e3-e4611dbd7785\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bkqvb" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.515935 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ab72073f-69cb-4719-b896-54618a6925db-oauth-serving-cert\") pod \"console-f9d7485db-95xtn\" (UID: \"ab72073f-69cb-4719-b896-54618a6925db\") " pod="openshift-console/console-f9d7485db-95xtn" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.516181 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8d4a47e-b68c-428e-9e69-11b1040dd23e-service-ca-bundle\") pod \"authentication-operator-69f744f599-st5q7\" (UID: \"c8d4a47e-b68c-428e-9e69-11b1040dd23e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-st5q7" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.518755 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6a6e5907-552b-4dc7-884f-d766a773e8b0-etcd-client\") pod \"apiserver-7bbb656c7d-lchcq\" (UID: \"6a6e5907-552b-4dc7-884f-d766a773e8b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.518844 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d20382c6-63e0-44c4-994b-952a489ece50-serving-cert\") pod \"console-operator-58897d9998-4z8st\" (UID: \"d20382c6-63e0-44c4-994b-952a489ece50\") " pod="openshift-console-operator/console-operator-58897d9998-4z8st" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.518912 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.518920 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/77aaec88-130d-43b9-9828-24098fc3748d-metrics-tls\") pod \"ingress-operator-5b745b69d9-b7c5h\" (UID: \"77aaec88-130d-43b9-9828-24098fc3748d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7c5h" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.518959 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97-serving-cert\") pod \"controller-manager-879f6c89f-wtrx4\" (UID: \"97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wtrx4" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.519032 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a6e5907-552b-4dc7-884f-d766a773e8b0-serving-cert\") pod \"apiserver-7bbb656c7d-lchcq\" (UID: \"6a6e5907-552b-4dc7-884f-d766a773e8b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.519096 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ab72073f-69cb-4719-b896-54618a6925db-console-oauth-config\") pod \"console-f9d7485db-95xtn\" (UID: \"ab72073f-69cb-4719-b896-54618a6925db\") " pod="openshift-console/console-f9d7485db-95xtn" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.519125 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c2b2a373-92d3-4af2-94e3-e4611dbd7785-machine-approver-tls\") pod \"machine-approver-56656f9798-bkqvb\" (UID: \"c2b2a373-92d3-4af2-94e3-e4611dbd7785\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bkqvb" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.519154 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.519402 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5fee1fa7-f83e-4be4-88f0-ed57f5f1d051-metrics-tls\") pod \"dns-operator-744455d44c-82b7j\" (UID: \"5fee1fa7-f83e-4be4-88f0-ed57f5f1d051\") " pod="openshift-dns-operator/dns-operator-744455d44c-82b7j" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.519469 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fa7d6587-5137-4b9b-accb-3b4800c1bce6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-56s7c\" (UID: \"fa7d6587-5137-4b9b-accb-3b4800c1bce6\") " pod="openshift-marketplace/marketplace-operator-79b997595-56s7c" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.519581 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a3ed54ef-a344-4af6-9e4c-abe9f8194edd-etcd-client\") pod \"etcd-operator-b45778765-s9gdc\" (UID: \"a3ed54ef-a344-4af6-9e4c-abe9f8194edd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9gdc" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.519673 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6a6e5907-552b-4dc7-884f-d766a773e8b0-encryption-config\") pod \"apiserver-7bbb656c7d-lchcq\" (UID: \"6a6e5907-552b-4dc7-884f-d766a773e8b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.519709 4892 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.519805 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/fc5af5cc-4a80-4fe0-9c4a-498408cdc453-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ctztv\" (UID: \"fc5af5cc-4a80-4fe0-9c4a-498408cdc453\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ctztv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.519822 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f7e5be0-b733-467d-afe0-35af7555688b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-fdkt8\" (UID: \"5f7e5be0-b733-467d-afe0-35af7555688b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fdkt8" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.520051 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.520063 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ed66930a-e393-47ea-a98d-907a1327edac-encryption-config\") pod \"apiserver-76f77b778f-zgz9g\" (UID: \"ed66930a-e393-47ea-a98d-907a1327edac\") " pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.520058 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3a357ae9-5621-4063-b475-508269240d98-audit-policies\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.520301 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed66930a-e393-47ea-a98d-907a1327edac-serving-cert\") pod \"apiserver-76f77b778f-zgz9g\" (UID: \"ed66930a-e393-47ea-a98d-907a1327edac\") " pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.520307 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.520581 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc 
kubenswrapper[4892]: I0122 09:12:56.520639 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.520713 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/beeba5d1-3ae7-4a2f-9e58-8b5baeea4ea6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-5qlbn\" (UID: \"beeba5d1-3ae7-4a2f-9e58-8b5baeea4ea6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5qlbn" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.520795 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3ed54ef-a344-4af6-9e4c-abe9f8194edd-serving-cert\") pod \"etcd-operator-b45778765-s9gdc\" (UID: \"a3ed54ef-a344-4af6-9e4c-abe9f8194edd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9gdc" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.529861 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ed66930a-e393-47ea-a98d-907a1327edac-image-import-ca\") pod \"apiserver-76f77b778f-zgz9g\" (UID: \"ed66930a-e393-47ea-a98d-907a1327edac\") " pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.539337 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.547673 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed66930a-e393-47ea-a98d-907a1327edac-trusted-ca-bundle\") pod \"apiserver-76f77b778f-zgz9g\" (UID: \"ed66930a-e393-47ea-a98d-907a1327edac\") " pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.554233 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.574532 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.595108 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.605094 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w62jw\" (UniqueName: \"kubernetes.io/projected/0f5b8d1f-6184-4d2f-81f5-bca2d7b198a3-kube-api-access-w62jw\") pod \"catalog-operator-68c6474976-96sxb\" (UID: \"0f5b8d1f-6184-4d2f-81f5-bca2d7b198a3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-96sxb" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.605132 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48ffec72-092a-4145-9136-d05df9fab68a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bbzt8\" (UID: 
\"48ffec72-092a-4145-9136-d05df9fab68a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbzt8" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.605179 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea2bbc05-6497-4f5d-9681-a4f2d27e7aa7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-m5gnq\" (UID: \"ea2bbc05-6497-4f5d-9681-a4f2d27e7aa7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m5gnq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.605203 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/23125b22-0965-46a8-a698-dc256f032b3c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-86wr5\" (UID: \"23125b22-0965-46a8-a698-dc256f032b3c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-86wr5" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.605245 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/48ffec72-092a-4145-9136-d05df9fab68a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bbzt8\" (UID: \"48ffec72-092a-4145-9136-d05df9fab68a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbzt8" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.605277 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/93fe0c31-5f71-4e0f-8325-3b246885136e-node-bootstrap-token\") pod \"machine-config-server-4knsq\" (UID: \"93fe0c31-5f71-4e0f-8325-3b246885136e\") " pod="openshift-machine-config-operator/machine-config-server-4knsq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.605365 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0f5b8d1f-6184-4d2f-81f5-bca2d7b198a3-srv-cert\") pod \"catalog-operator-68c6474976-96sxb\" (UID: \"0f5b8d1f-6184-4d2f-81f5-bca2d7b198a3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-96sxb" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.605387 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/93fe0c31-5f71-4e0f-8325-3b246885136e-certs\") pod \"machine-config-server-4knsq\" (UID: \"93fe0c31-5f71-4e0f-8325-3b246885136e\") " pod="openshift-machine-config-operator/machine-config-server-4knsq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.605465 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4vzc\" (UniqueName: \"kubernetes.io/projected/23125b22-0965-46a8-a698-dc256f032b3c-kube-api-access-v4vzc\") pod \"multus-admission-controller-857f4d67dd-86wr5\" (UID: \"23125b22-0965-46a8-a698-dc256f032b3c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-86wr5" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.605514 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea2bbc05-6497-4f5d-9681-a4f2d27e7aa7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-m5gnq\" (UID: \"ea2bbc05-6497-4f5d-9681-a4f2d27e7aa7\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m5gnq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.605536 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0f5b8d1f-6184-4d2f-81f5-bca2d7b198a3-profile-collector-cert\") pod \"catalog-operator-68c6474976-96sxb\" (UID: \"0f5b8d1f-6184-4d2f-81f5-bca2d7b198a3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-96sxb" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.605648 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48ffec72-092a-4145-9136-d05df9fab68a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bbzt8\" (UID: \"48ffec72-092a-4145-9136-d05df9fab68a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbzt8" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.605726 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c8n9\" (UniqueName: \"kubernetes.io/projected/93fe0c31-5f71-4e0f-8325-3b246885136e-kube-api-access-6c8n9\") pod \"machine-config-server-4knsq\" (UID: \"93fe0c31-5f71-4e0f-8325-3b246885136e\") " pod="openshift-machine-config-operator/machine-config-server-4knsq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.605801 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea2bbc05-6497-4f5d-9681-a4f2d27e7aa7-config\") pod \"kube-controller-manager-operator-78b949d7b-m5gnq\" (UID: \"ea2bbc05-6497-4f5d-9681-a4f2d27e7aa7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m5gnq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.614035 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.623475 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2f01c3a-8bc3-460e-ba9e-3e21d9a15621-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-26bnl\" (UID: \"e2f01c3a-8bc3-460e-ba9e-3e21d9a15621\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-26bnl" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.634584 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.640793 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2f01c3a-8bc3-460e-ba9e-3e21d9a15621-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-26bnl\" (UID: \"e2f01c3a-8bc3-460e-ba9e-3e21d9a15621\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-26bnl" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.654102 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.658504 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/ab72073f-69cb-4719-b896-54618a6925db-console-config\") pod \"console-f9d7485db-95xtn\" (UID: \"ab72073f-69cb-4719-b896-54618a6925db\") " pod="openshift-console/console-f9d7485db-95xtn" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.673951 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.695008 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.699193 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ab72073f-69cb-4719-b896-54618a6925db-service-ca\") pod \"console-f9d7485db-95xtn\" (UID: \"ab72073f-69cb-4719-b896-54618a6925db\") " pod="openshift-console/console-f9d7485db-95xtn" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.714492 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.741136 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.754184 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.774582 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.795263 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.814562 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.834194 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.840756 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48ffec72-092a-4145-9136-d05df9fab68a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bbzt8\" (UID: \"48ffec72-092a-4145-9136-d05df9fab68a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbzt8" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.853632 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.873775 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.885547 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2895e12-d7e7-4eb4-8455-cae19e2347c2-serving-cert\") pod \"openshift-config-operator-7777fb866f-kw2w9\" (UID: \"f2895e12-d7e7-4eb4-8455-cae19e2347c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kw2w9" Jan 22 
09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.894178 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.914438 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.917844 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea2bbc05-6497-4f5d-9681-a4f2d27e7aa7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-m5gnq\" (UID: \"ea2bbc05-6497-4f5d-9681-a4f2d27e7aa7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m5gnq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.934619 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.954341 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.973900 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.977390 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea2bbc05-6497-4f5d-9681-a4f2d27e7aa7-config\") pod \"kube-controller-manager-operator-78b949d7b-m5gnq\" (UID: \"ea2bbc05-6497-4f5d-9681-a4f2d27e7aa7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m5gnq" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.994213 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 22 09:12:56 crc kubenswrapper[4892]: I0122 09:12:56.996663 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48ffec72-092a-4145-9136-d05df9fab68a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bbzt8\" (UID: \"48ffec72-092a-4145-9136-d05df9fab68a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbzt8" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.014438 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.034764 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.053740 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.074384 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.079147 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/23125b22-0965-46a8-a698-dc256f032b3c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-86wr5\" (UID: \"23125b22-0965-46a8-a698-dc256f032b3c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-86wr5" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.095539 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.134560 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.154210 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.174161 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.179149 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0f5b8d1f-6184-4d2f-81f5-bca2d7b198a3-profile-collector-cert\") pod \"catalog-operator-68c6474976-96sxb\" (UID: \"0f5b8d1f-6184-4d2f-81f5-bca2d7b198a3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-96sxb" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.195411 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.214824 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.234934 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.254387 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.274495 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.293741 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.312895 4892 request.go:700] Waited for 1.010845402s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-sa-dockercfg-5xfcg&limit=500&resourceVersion=0 Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.314376 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.333678 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.354938 4892 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.373961 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.393968 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.413641 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.434173 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.454556 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.474280 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.494915 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.514818 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.519315 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0f5b8d1f-6184-4d2f-81f5-bca2d7b198a3-srv-cert\") pod \"catalog-operator-68c6474976-96sxb\" (UID: \"0f5b8d1f-6184-4d2f-81f5-bca2d7b198a3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-96sxb" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.534225 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.554862 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.574417 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.594267 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 22 09:12:57 crc kubenswrapper[4892]: E0122 09:12:57.605820 4892 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Jan 22 09:12:57 crc kubenswrapper[4892]: E0122 09:12:57.606043 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/93fe0c31-5f71-4e0f-8325-3b246885136e-certs podName:93fe0c31-5f71-4e0f-8325-3b246885136e nodeName:}" failed. No retries permitted until 2026-01-22 09:12:58.106019832 +0000 UTC m=+147.950098895 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/93fe0c31-5f71-4e0f-8325-3b246885136e-certs") pod "machine-config-server-4knsq" (UID: "93fe0c31-5f71-4e0f-8325-3b246885136e") : failed to sync secret cache: timed out waiting for the condition Jan 22 09:12:57 crc kubenswrapper[4892]: E0122 09:12:57.605845 4892 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Jan 22 09:12:57 crc kubenswrapper[4892]: E0122 09:12:57.606254 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/93fe0c31-5f71-4e0f-8325-3b246885136e-node-bootstrap-token podName:93fe0c31-5f71-4e0f-8325-3b246885136e nodeName:}" failed. No retries permitted until 2026-01-22 09:12:58.106242408 +0000 UTC m=+147.950321471 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/93fe0c31-5f71-4e0f-8325-3b246885136e-node-bootstrap-token") pod "machine-config-server-4knsq" (UID: "93fe0c31-5f71-4e0f-8325-3b246885136e") : failed to sync secret cache: timed out waiting for the condition Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.613989 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.634173 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.654177 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.674947 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.693879 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.713785 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.734326 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.753996 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.774412 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.795154 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.814155 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.834005 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.855273 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 
09:12:57.874074 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.894121 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.914757 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.934122 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.954990 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.974378 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 22 09:12:57 crc kubenswrapper[4892]: I0122 09:12:57.994985 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.018135 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.034155 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.074555 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.094184 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.114618 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.126020 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/93fe0c31-5f71-4e0f-8325-3b246885136e-node-bootstrap-token\") pod \"machine-config-server-4knsq\" (UID: \"93fe0c31-5f71-4e0f-8325-3b246885136e\") " pod="openshift-machine-config-operator/machine-config-server-4knsq" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.126135 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/93fe0c31-5f71-4e0f-8325-3b246885136e-certs\") pod \"machine-config-server-4knsq\" (UID: \"93fe0c31-5f71-4e0f-8325-3b246885136e\") " pod="openshift-machine-config-operator/machine-config-server-4knsq" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.129433 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/93fe0c31-5f71-4e0f-8325-3b246885136e-node-bootstrap-token\") pod \"machine-config-server-4knsq\" (UID: \"93fe0c31-5f71-4e0f-8325-3b246885136e\") " pod="openshift-machine-config-operator/machine-config-server-4knsq" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.130101 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" 
(UniqueName: \"kubernetes.io/secret/93fe0c31-5f71-4e0f-8325-3b246885136e-certs\") pod \"machine-config-server-4knsq\" (UID: \"93fe0c31-5f71-4e0f-8325-3b246885136e\") " pod="openshift-machine-config-operator/machine-config-server-4knsq" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.134538 4892 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.155124 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.174985 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.193976 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.214201 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.235037 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.253866 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.286731 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npw47\" (UniqueName: \"kubernetes.io/projected/6a6e5907-552b-4dc7-884f-d766a773e8b0-kube-api-access-npw47\") pod \"apiserver-7bbb656c7d-lchcq\" (UID: \"6a6e5907-552b-4dc7-884f-d766a773e8b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.307704 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdkhz\" (UniqueName: \"kubernetes.io/projected/09f94488-4261-4a70-ab65-e85c42ba3313-kube-api-access-vdkhz\") pod \"machine-api-operator-5694c8668f-mhgmk\" (UID: \"09f94488-4261-4a70-ab65-e85c42ba3313\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mhgmk" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.313330 4892 request.go:700] Waited for 1.808930106s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/serviceaccounts/machine-approver-sa/token Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.328233 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tthkj\" (UniqueName: \"kubernetes.io/projected/c2b2a373-92d3-4af2-94e3-e4611dbd7785-kube-api-access-tthkj\") pod \"machine-approver-56656f9798-bkqvb\" (UID: \"c2b2a373-92d3-4af2-94e3-e4611dbd7785\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bkqvb" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.328606 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:12:58 crc kubenswrapper[4892]: E0122 09:12:58.328726 4892 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:15:00.328697845 +0000 UTC m=+270.172776978 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.328821 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.328954 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.328981 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.331334 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.331766 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.331835 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.338642 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.348851 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrptt\" (UniqueName: \"kubernetes.io/projected/5fee1fa7-f83e-4be4-88f0-ed57f5f1d051-kube-api-access-wrptt\") pod \"dns-operator-744455d44c-82b7j\" (UID: \"5fee1fa7-f83e-4be4-88f0-ed57f5f1d051\") " pod="openshift-dns-operator/dns-operator-744455d44c-82b7j" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.370892 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/77aaec88-130d-43b9-9828-24098fc3748d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-b7c5h\" (UID: \"77aaec88-130d-43b9-9828-24098fc3748d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7c5h" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.390973 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg2t4\" (UniqueName: \"kubernetes.io/projected/a3ed54ef-a344-4af6-9e4c-abe9f8194edd-kube-api-access-lg2t4\") pod \"etcd-operator-b45778765-s9gdc\" (UID: \"a3ed54ef-a344-4af6-9e4c-abe9f8194edd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s9gdc" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.406444 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-mhgmk" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.413972 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn4qt\" (UniqueName: \"kubernetes.io/projected/ed66930a-e393-47ea-a98d-907a1327edac-kube-api-access-gn4qt\") pod \"apiserver-76f77b778f-zgz9g\" (UID: \"ed66930a-e393-47ea-a98d-907a1327edac\") " pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.414212 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bkqvb" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.431324 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.432348 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.447451 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t67kf\" (UniqueName: \"kubernetes.io/projected/3a357ae9-5621-4063-b475-508269240d98-kube-api-access-t67kf\") pod \"oauth-openshift-558db77b4-j4vqv\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.449702 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4855g\" (UniqueName: \"kubernetes.io/projected/beeba5d1-3ae7-4a2f-9e58-8b5baeea4ea6-kube-api-access-4855g\") pod \"cluster-samples-operator-665b6dd947-5qlbn\" (UID: \"beeba5d1-3ae7-4a2f-9e58-8b5baeea4ea6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5qlbn" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.476596 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.479219 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9rwv\" (UniqueName: \"kubernetes.io/projected/97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97-kube-api-access-k9rwv\") pod \"controller-manager-879f6c89f-wtrx4\" (UID: \"97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wtrx4" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.491189 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.494737 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h59xg\" (UniqueName: \"kubernetes.io/projected/fa7d6587-5137-4b9b-accb-3b4800c1bce6-kube-api-access-h59xg\") pod \"marketplace-operator-79b997595-56s7c\" (UID: \"fa7d6587-5137-4b9b-accb-3b4800c1bce6\") " pod="openshift-marketplace/marketplace-operator-79b997595-56s7c" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.505719 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-82b7j" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.513517 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6khz\" (UniqueName: \"kubernetes.io/projected/d20382c6-63e0-44c4-994b-952a489ece50-kube-api-access-f6khz\") pod \"console-operator-58897d9998-4z8st\" (UID: \"d20382c6-63e0-44c4-994b-952a489ece50\") " pod="openshift-console-operator/console-operator-58897d9998-4z8st" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.519835 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-4z8st" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.521839 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-s9gdc" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.528505 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-56s7c" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.533769 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skzhm\" (UniqueName: \"kubernetes.io/projected/fc5af5cc-4a80-4fe0-9c4a-498408cdc453-kube-api-access-skzhm\") pod \"package-server-manager-789f6589d5-ctztv\" (UID: \"fc5af5cc-4a80-4fe0-9c4a-498408cdc453\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ctztv" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.541518 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ctztv" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.550947 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwj7j\" (UniqueName: \"kubernetes.io/projected/77aaec88-130d-43b9-9828-24098fc3748d-kube-api-access-vwj7j\") pod \"ingress-operator-5b745b69d9-b7c5h\" (UID: \"77aaec88-130d-43b9-9828-24098fc3748d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7c5h" Jan 22 09:12:58 crc kubenswrapper[4892]: W0122 09:12:58.552462 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-5c021f728e2422b78c96032991237dd58369d8e5538b3c9c1c7698019260c8b7 WatchSource:0}: Error finding container 5c021f728e2422b78c96032991237dd58369d8e5538b3c9c1c7698019260c8b7: Status 404 returned error can't find the container with id 5c021f728e2422b78c96032991237dd58369d8e5538b3c9c1c7698019260c8b7 Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.570845 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.576725 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcc7m\" (UniqueName: \"kubernetes.io/projected/c8d4a47e-b68c-428e-9e69-11b1040dd23e-kube-api-access-gcc7m\") pod \"authentication-operator-69f744f599-st5q7\" (UID: \"c8d4a47e-b68c-428e-9e69-11b1040dd23e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-st5q7" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.589695 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fm7c\" (UniqueName: \"kubernetes.io/projected/ab72073f-69cb-4719-b896-54618a6925db-kube-api-access-7fm7c\") pod \"console-f9d7485db-95xtn\" (UID: \"ab72073f-69cb-4719-b896-54618a6925db\") " pod="openshift-console/console-f9d7485db-95xtn" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.608629 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9r2s\" (UniqueName: \"kubernetes.io/projected/e2f01c3a-8bc3-460e-ba9e-3e21d9a15621-kube-api-access-v9r2s\") pod \"openshift-controller-manager-operator-756b6f6bc6-26bnl\" (UID: \"e2f01c3a-8bc3-460e-ba9e-3e21d9a15621\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-26bnl" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.629814 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6j95\" (UniqueName: \"kubernetes.io/projected/5f7e5be0-b733-467d-afe0-35af7555688b-kube-api-access-j6j95\") pod \"openshift-apiserver-operator-796bbdcf4f-fdkt8\" (UID: \"5f7e5be0-b733-467d-afe0-35af7555688b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fdkt8" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.630713 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.644514 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.650667 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggsdz\" (UniqueName: \"kubernetes.io/projected/f2895e12-d7e7-4eb4-8455-cae19e2347c2-kube-api-access-ggsdz\") pod \"openshift-config-operator-7777fb866f-kw2w9\" (UID: \"f2895e12-d7e7-4eb4-8455-cae19e2347c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kw2w9" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.663766 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wtrx4" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.678081 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsd6n\" (UniqueName: \"kubernetes.io/projected/a91f44ce-a5d5-4379-a443-c61626f142f7-kube-api-access-rsd6n\") pod \"route-controller-manager-6576b87f9c-vgpdt\" (UID: \"a91f44ce-a5d5-4379-a443-c61626f142f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vgpdt" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.689960 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w62jw\" (UniqueName: \"kubernetes.io/projected/0f5b8d1f-6184-4d2f-81f5-bca2d7b198a3-kube-api-access-w62jw\") pod \"catalog-operator-68c6474976-96sxb\" (UID: \"0f5b8d1f-6184-4d2f-81f5-bca2d7b198a3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-96sxb" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.720087 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-96sxb" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.721173 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea2bbc05-6497-4f5d-9681-a4f2d27e7aa7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-m5gnq\" (UID: \"ea2bbc05-6497-4f5d-9681-a4f2d27e7aa7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m5gnq" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.732804 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4vzc\" (UniqueName: \"kubernetes.io/projected/23125b22-0965-46a8-a698-dc256f032b3c-kube-api-access-v4vzc\") pod \"multus-admission-controller-857f4d67dd-86wr5\" (UID: \"23125b22-0965-46a8-a698-dc256f032b3c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-86wr5" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.749124 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5qlbn" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.751150 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/48ffec72-092a-4145-9136-d05df9fab68a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bbzt8\" (UID: \"48ffec72-092a-4145-9136-d05df9fab68a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbzt8" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.758945 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq"] Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.762547 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fdkt8" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.768045 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c8n9\" (UniqueName: \"kubernetes.io/projected/93fe0c31-5f71-4e0f-8325-3b246885136e-kube-api-access-6c8n9\") pod \"machine-config-server-4knsq\" (UID: \"93fe0c31-5f71-4e0f-8325-3b246885136e\") " pod="openshift-machine-config-operator/machine-config-server-4knsq" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.784627 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vgpdt" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.791779 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-4knsq" Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.796934 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7c5h" Jan 22 09:12:58 crc kubenswrapper[4892]: W0122 09:12:58.797957 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a6e5907_552b_4dc7_884f_d766a773e8b0.slice/crio-48d1b40a40526ca245c5f2e1229bb888bab73e0c77841496d2d5136cbe3050f5 WatchSource:0}: Error finding container 48d1b40a40526ca245c5f2e1229bb888bab73e0c77841496d2d5136cbe3050f5: Status 404 returned error can't find the container with id 48d1b40a40526ca245c5f2e1229bb888bab73e0c77841496d2d5136cbe3050f5 Jan 22 09:12:58 crc kubenswrapper[4892]: I0122 09:12:58.808025 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-mhgmk"] Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.527831 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-95xtn" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.528856 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-26bnl" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.529554 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbzt8" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.529843 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-st5q7" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.530361 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kw2w9" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.530957 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-86wr5" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.531378 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m5gnq" Jan 22 09:12:59 crc kubenswrapper[4892]: W0122 09:12:59.536243 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09f94488_4261_4a70_ab65_e85c42ba3313.slice/crio-e333339c8f78fef60fdbad4db9fba87c15c259c2d38c49268b0467d4e700e2c8 WatchSource:0}: Error finding container e333339c8f78fef60fdbad4db9fba87c15c259c2d38c49268b0467d4e700e2c8: Status 404 returned error can't find the container with id e333339c8f78fef60fdbad4db9fba87c15c259c2d38c49268b0467d4e700e2c8 Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.543925 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82cc8d88-f16b-4fa6-90f3-c661418b5a12-serving-cert\") pod \"service-ca-operator-777779d784-wfs96\" (UID: \"82cc8d88-f16b-4fa6-90f3-c661418b5a12\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wfs96" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.543984 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82cc8d88-f16b-4fa6-90f3-c661418b5a12-config\") pod \"service-ca-operator-777779d784-wfs96\" (UID: \"82cc8d88-f16b-4fa6-90f3-c661418b5a12\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wfs96" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.544048 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2-default-certificate\") pod \"router-default-5444994796-bknqb\" (UID: \"1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2\") " pod="openshift-ingress/router-default-5444994796-bknqb" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.544078 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwcdw\" (UniqueName: \"kubernetes.io/projected/71f045da-d27a-4ec0-a059-2107d4e0225c-kube-api-access-zwcdw\") pod \"service-ca-9c57cc56f-cdmgc\" (UID: \"71f045da-d27a-4ec0-a059-2107d4e0225c\") " pod="openshift-service-ca/service-ca-9c57cc56f-cdmgc" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.544108 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5khn\" (UniqueName: \"kubernetes.io/projected/7a02064e-95a9-4e55-b21e-45868b0362f2-kube-api-access-v5khn\") pod \"packageserver-d55dfcdfc-pkwbh\" (UID: \"7a02064e-95a9-4e55-b21e-45868b0362f2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pkwbh" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.544180 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b7f981e-f1f9-44c3-9b6b-cbb8a2ba1605-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pjgc7\" (UID: \"1b7f981e-f1f9-44c3-9b6b-cbb8a2ba1605\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pjgc7" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.544735 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/909e154f-849c-4f69-96fb-74649b6db346-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vw5wt\" (UID: \"909e154f-849c-4f69-96fb-74649b6db346\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vw5wt" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.545531 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7a02064e-95a9-4e55-b21e-45868b0362f2-tmpfs\") pod \"packageserver-d55dfcdfc-pkwbh\" (UID: \"7a02064e-95a9-4e55-b21e-45868b0362f2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pkwbh" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.545587 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.545812 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e92cbfe3-753e-4263-b38b-05c6d89fc48c-proxy-tls\") pod \"machine-config-controller-84d6567774-n2ds6\" (UID: \"e92cbfe3-753e-4263-b38b-05c6d89fc48c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n2ds6" Jan 22 09:12:59 crc kubenswrapper[4892]: E0122 09:12:59.546827 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:00.046805491 +0000 UTC m=+149.890884554 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.546886 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/71f045da-d27a-4ec0-a059-2107d4e0225c-signing-cabundle\") pod \"service-ca-9c57cc56f-cdmgc\" (UID: \"71f045da-d27a-4ec0-a059-2107d4e0225c\") " pod="openshift-service-ca/service-ca-9c57cc56f-cdmgc" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.547033 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvlht\" (UniqueName: \"kubernetes.io/projected/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-kube-api-access-tvlht\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.547191 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b91d631a-7830-4dc7-a6f1-31de28781a82-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hzsh9\" (UID: \"b91d631a-7830-4dc7-a6f1-31de28781a82\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hzsh9" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.547249 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2-service-ca-bundle\") pod \"router-default-5444994796-bknqb\" (UID: \"1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2\") " pod="openshift-ingress/router-default-5444994796-bknqb" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.547352 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b91d631a-7830-4dc7-a6f1-31de28781a82-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hzsh9\" (UID: \"b91d631a-7830-4dc7-a6f1-31de28781a82\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hzsh9" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.547848 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svszb\" (UniqueName: \"kubernetes.io/projected/d0142ef1-2eb1-43e9-99f3-e81a44383bd0-kube-api-access-svszb\") pod \"downloads-7954f5f757-smrbs\" (UID: \"d0142ef1-2eb1-43e9-99f3-e81a44383bd0\") " pod="openshift-console/downloads-7954f5f757-smrbs" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.548023 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7a02064e-95a9-4e55-b21e-45868b0362f2-apiservice-cert\") pod \"packageserver-d55dfcdfc-pkwbh\" (UID: \"7a02064e-95a9-4e55-b21e-45868b0362f2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pkwbh" 
Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.548242 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-registry-tls\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.549144 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b7f981e-f1f9-44c3-9b6b-cbb8a2ba1605-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pjgc7\" (UID: \"1b7f981e-f1f9-44c3-9b6b-cbb8a2ba1605\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pjgc7" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.550430 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-479sg\" (UniqueName: \"kubernetes.io/projected/0fbd9767-4b93-4496-b8e8-6a7d7f50df5e-kube-api-access-479sg\") pod \"olm-operator-6b444d44fb-pprl2\" (UID: \"0fbd9767-4b93-4496-b8e8-6a7d7f50df5e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pprl2" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.550598 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlj69\" (UniqueName: \"kubernetes.io/projected/3a01b910-5841-4f20-b270-c7040213ac8d-kube-api-access-xlj69\") pod \"control-plane-machine-set-operator-78cbb6b69f-wf6dw\" (UID: \"3a01b910-5841-4f20-b270-c7040213ac8d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wf6dw" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.550680 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b91d631a-7830-4dc7-a6f1-31de28781a82-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hzsh9\" (UID: \"b91d631a-7830-4dc7-a6f1-31de28781a82\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hzsh9" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.550751 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqpkd\" (UniqueName: \"kubernetes.io/projected/1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2-kube-api-access-fqpkd\") pod \"router-default-5444994796-bknqb\" (UID: \"1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2\") " pod="openshift-ingress/router-default-5444994796-bknqb" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.608231 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4z8st"] Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.608387 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ctztv"] Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.608494 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"5c021f728e2422b78c96032991237dd58369d8e5538b3c9c1c7698019260c8b7"} Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.608579 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-56s7c"] Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.609785 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq" event={"ID":"6a6e5907-552b-4dc7-884f-d766a773e8b0","Type":"ContainerStarted","Data":"48d1b40a40526ca245c5f2e1229bb888bab73e0c77841496d2d5136cbe3050f5"} Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.610098 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a5b2a70f-548b-4483-a064-4f993d38286a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-f5sj6\" (UID: \"a5b2a70f-548b-4483-a064-4f993d38286a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5sj6" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.610508 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g2br\" (UniqueName: \"kubernetes.io/projected/e92cbfe3-753e-4263-b38b-05c6d89fc48c-kube-api-access-7g2br\") pod \"machine-config-controller-84d6567774-n2ds6\" (UID: \"e92cbfe3-753e-4263-b38b-05c6d89fc48c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n2ds6" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.610660 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-trusted-ca\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.610816 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-bound-sa-token\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.610919 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2-stats-auth\") pod \"router-default-5444994796-bknqb\" (UID: \"1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2\") " pod="openshift-ingress/router-default-5444994796-bknqb" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.611013 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bkqvb" event={"ID":"c2b2a373-92d3-4af2-94e3-e4611dbd7785","Type":"ContainerStarted","Data":"5963cd3f1edb05ae1d4e15a22168e932d7f5214d2dc7becef8d7e2a5d46e2748"} Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.611024 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/71f045da-d27a-4ec0-a059-2107d4e0225c-signing-key\") pod \"service-ca-9c57cc56f-cdmgc\" (UID: \"71f045da-d27a-4ec0-a059-2107d4e0225c\") " pod="openshift-service-ca/service-ca-9c57cc56f-cdmgc" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.611267 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmlfh\" (UniqueName: 
\"kubernetes.io/projected/7bd4dc40-4703-47e1-93d5-5995342429e3-kube-api-access-kmlfh\") pod \"migrator-59844c95c7-698rc\" (UID: \"7bd4dc40-4703-47e1-93d5-5995342429e3\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-698rc" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.611391 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b7f981e-f1f9-44c3-9b6b-cbb8a2ba1605-config\") pod \"kube-apiserver-operator-766d6c64bb-pjgc7\" (UID: \"1b7f981e-f1f9-44c3-9b6b-cbb8a2ba1605\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pjgc7" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.611546 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-registry-certificates\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.612047 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.612077 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0fbd9767-4b93-4496-b8e8-6a7d7f50df5e-srv-cert\") pod \"olm-operator-6b444d44fb-pprl2\" (UID: \"0fbd9767-4b93-4496-b8e8-6a7d7f50df5e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pprl2" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.612261 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.612364 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxcgb\" (UniqueName: \"kubernetes.io/projected/a5b2a70f-548b-4483-a064-4f993d38286a-kube-api-access-nxcgb\") pod \"machine-config-operator-74547568cd-f5sj6\" (UID: \"a5b2a70f-548b-4483-a064-4f993d38286a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5sj6" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.612414 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29d5w\" (UniqueName: \"kubernetes.io/projected/82cc8d88-f16b-4fa6-90f3-c661418b5a12-kube-api-access-29d5w\") pod \"service-ca-operator-777779d784-wfs96\" (UID: \"82cc8d88-f16b-4fa6-90f3-c661418b5a12\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wfs96" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.612453 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/a5b2a70f-548b-4483-a064-4f993d38286a-proxy-tls\") pod \"machine-config-operator-74547568cd-f5sj6\" (UID: \"a5b2a70f-548b-4483-a064-4f993d38286a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5sj6" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.612517 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/665bbb60-4e79-4d6d-b805-1e03ef3442be-secret-volume\") pod \"collect-profiles-29484540-bgd4w\" (UID: \"665bbb60-4e79-4d6d-b805-1e03ef3442be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-bgd4w" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.612543 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g96m\" (UniqueName: \"kubernetes.io/projected/909e154f-849c-4f69-96fb-74649b6db346-kube-api-access-9g96m\") pod \"kube-storage-version-migrator-operator-b67b599dd-vw5wt\" (UID: \"909e154f-849c-4f69-96fb-74649b6db346\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vw5wt" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.612580 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2-metrics-certs\") pod \"router-default-5444994796-bknqb\" (UID: \"1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2\") " pod="openshift-ingress/router-default-5444994796-bknqb" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.612602 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e92cbfe3-753e-4263-b38b-05c6d89fc48c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-n2ds6\" (UID: \"e92cbfe3-753e-4263-b38b-05c6d89fc48c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n2ds6" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.612648 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/665bbb60-4e79-4d6d-b805-1e03ef3442be-config-volume\") pod \"collect-profiles-29484540-bgd4w\" (UID: \"665bbb60-4e79-4d6d-b805-1e03ef3442be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-bgd4w" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.612668 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lk47\" (UniqueName: \"kubernetes.io/projected/665bbb60-4e79-4d6d-b805-1e03ef3442be-kube-api-access-2lk47\") pod \"collect-profiles-29484540-bgd4w\" (UID: \"665bbb60-4e79-4d6d-b805-1e03ef3442be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-bgd4w" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.612690 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a02064e-95a9-4e55-b21e-45868b0362f2-webhook-cert\") pod \"packageserver-d55dfcdfc-pkwbh\" (UID: \"7a02064e-95a9-4e55-b21e-45868b0362f2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pkwbh" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.613114 4892 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0fbd9767-4b93-4496-b8e8-6a7d7f50df5e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pprl2\" (UID: \"0fbd9767-4b93-4496-b8e8-6a7d7f50df5e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pprl2" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.613195 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s75xn\" (UniqueName: \"kubernetes.io/projected/b91d631a-7830-4dc7-a6f1-31de28781a82-kube-api-access-s75xn\") pod \"cluster-image-registry-operator-dc59b4c8b-hzsh9\" (UID: \"b91d631a-7830-4dc7-a6f1-31de28781a82\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hzsh9" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.613218 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a5b2a70f-548b-4483-a064-4f993d38286a-images\") pod \"machine-config-operator-74547568cd-f5sj6\" (UID: \"a5b2a70f-548b-4483-a064-4f993d38286a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5sj6" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.613239 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3a01b910-5841-4f20-b270-c7040213ac8d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-wf6dw\" (UID: \"3a01b910-5841-4f20-b270-c7040213ac8d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wf6dw" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.613270 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/909e154f-849c-4f69-96fb-74649b6db346-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vw5wt\" (UID: \"909e154f-849c-4f69-96fb-74649b6db346\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vw5wt" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.621973 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-zgz9g"] Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.624835 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-j4vqv"] Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.626501 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-s9gdc"] Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.628192 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5qlbn"] Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.630596 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-96sxb"] Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.631808 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-82b7j"] Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.633828 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-wtrx4"] Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.635512 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fdkt8"] Jan 22 09:12:59 crc kubenswrapper[4892]: W0122 09:12:59.669159 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fee1fa7_f83e_4be4_88f0_ed57f5f1d051.slice/crio-f175472aa6d99eeaf9b84b3e6ce1a5728aa3af5f28e9bc6f529049f75d3433b6 WatchSource:0}: Error finding container f175472aa6d99eeaf9b84b3e6ce1a5728aa3af5f28e9bc6f529049f75d3433b6: Status 404 returned error can't find the container with id f175472aa6d99eeaf9b84b3e6ce1a5728aa3af5f28e9bc6f529049f75d3433b6 Jan 22 09:12:59 crc kubenswrapper[4892]: W0122 09:12:59.671778 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f5b8d1f_6184_4d2f_81f5_bca2d7b198a3.slice/crio-dc1ceaf97bf892064bd9b18ae98972fd2d14e25a4d42bd43d6ddace4ec784f73 WatchSource:0}: Error finding container dc1ceaf97bf892064bd9b18ae98972fd2d14e25a4d42bd43d6ddace4ec784f73: Status 404 returned error can't find the container with id dc1ceaf97bf892064bd9b18ae98972fd2d14e25a4d42bd43d6ddace4ec784f73 Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.716598 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.716940 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxcgb\" (UniqueName: \"kubernetes.io/projected/a5b2a70f-548b-4483-a064-4f993d38286a-kube-api-access-nxcgb\") pod \"machine-config-operator-74547568cd-f5sj6\" (UID: \"a5b2a70f-548b-4483-a064-4f993d38286a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5sj6" Jan 22 09:12:59 crc kubenswrapper[4892]: E0122 09:12:59.716964 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:00.21693821 +0000 UTC m=+150.061017283 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.717091 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29d5w\" (UniqueName: \"kubernetes.io/projected/82cc8d88-f16b-4fa6-90f3-c661418b5a12-kube-api-access-29d5w\") pod \"service-ca-operator-777779d784-wfs96\" (UID: \"82cc8d88-f16b-4fa6-90f3-c661418b5a12\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wfs96" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.717121 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1949fb71-227b-4649-b4e2-62b3d7519dec-mountpoint-dir\") pod \"csi-hostpathplugin-rhbtn\" (UID: \"1949fb71-227b-4649-b4e2-62b3d7519dec\") " pod="hostpath-provisioner/csi-hostpathplugin-rhbtn" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.717152 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a5b2a70f-548b-4483-a064-4f993d38286a-proxy-tls\") pod \"machine-config-operator-74547568cd-f5sj6\" (UID: \"a5b2a70f-548b-4483-a064-4f993d38286a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5sj6" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.717184 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/665bbb60-4e79-4d6d-b805-1e03ef3442be-secret-volume\") pod \"collect-profiles-29484540-bgd4w\" (UID: \"665bbb60-4e79-4d6d-b805-1e03ef3442be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-bgd4w" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.717211 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g96m\" (UniqueName: \"kubernetes.io/projected/909e154f-849c-4f69-96fb-74649b6db346-kube-api-access-9g96m\") pod \"kube-storage-version-migrator-operator-b67b599dd-vw5wt\" (UID: \"909e154f-849c-4f69-96fb-74649b6db346\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vw5wt" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.717247 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e92cbfe3-753e-4263-b38b-05c6d89fc48c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-n2ds6\" (UID: \"e92cbfe3-753e-4263-b38b-05c6d89fc48c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n2ds6" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.717271 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2-metrics-certs\") pod \"router-default-5444994796-bknqb\" (UID: \"1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2\") " pod="openshift-ingress/router-default-5444994796-bknqb" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.717344 4892 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/665bbb60-4e79-4d6d-b805-1e03ef3442be-config-volume\") pod \"collect-profiles-29484540-bgd4w\" (UID: \"665bbb60-4e79-4d6d-b805-1e03ef3442be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-bgd4w" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.717372 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lk47\" (UniqueName: \"kubernetes.io/projected/665bbb60-4e79-4d6d-b805-1e03ef3442be-kube-api-access-2lk47\") pod \"collect-profiles-29484540-bgd4w\" (UID: \"665bbb60-4e79-4d6d-b805-1e03ef3442be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-bgd4w" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.717393 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a02064e-95a9-4e55-b21e-45868b0362f2-webhook-cert\") pod \"packageserver-d55dfcdfc-pkwbh\" (UID: \"7a02064e-95a9-4e55-b21e-45868b0362f2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pkwbh" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.717431 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0fbd9767-4b93-4496-b8e8-6a7d7f50df5e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pprl2\" (UID: \"0fbd9767-4b93-4496-b8e8-6a7d7f50df5e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pprl2" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.717470 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s75xn\" (UniqueName: \"kubernetes.io/projected/b91d631a-7830-4dc7-a6f1-31de28781a82-kube-api-access-s75xn\") pod \"cluster-image-registry-operator-dc59b4c8b-hzsh9\" (UID: \"b91d631a-7830-4dc7-a6f1-31de28781a82\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hzsh9" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.717492 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a5b2a70f-548b-4483-a064-4f993d38286a-images\") pod \"machine-config-operator-74547568cd-f5sj6\" (UID: \"a5b2a70f-548b-4483-a064-4f993d38286a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5sj6" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.717517 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3a01b910-5841-4f20-b270-c7040213ac8d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-wf6dw\" (UID: \"3a01b910-5841-4f20-b270-c7040213ac8d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wf6dw" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.717545 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/909e154f-849c-4f69-96fb-74649b6db346-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vw5wt\" (UID: \"909e154f-849c-4f69-96fb-74649b6db346\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vw5wt" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.717574 4892 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkvxg\" (UniqueName: \"kubernetes.io/projected/3d0f0ae0-7c09-407c-b260-127ff7828c39-kube-api-access-dkvxg\") pod \"dns-default-hk2bk\" (UID: \"3d0f0ae0-7c09-407c-b260-127ff7828c39\") " pod="openshift-dns/dns-default-hk2bk" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.717621 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82cc8d88-f16b-4fa6-90f3-c661418b5a12-serving-cert\") pod \"service-ca-operator-777779d784-wfs96\" (UID: \"82cc8d88-f16b-4fa6-90f3-c661418b5a12\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wfs96" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.717645 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82cc8d88-f16b-4fa6-90f3-c661418b5a12-config\") pod \"service-ca-operator-777779d784-wfs96\" (UID: \"82cc8d88-f16b-4fa6-90f3-c661418b5a12\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wfs96" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.717682 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1949fb71-227b-4649-b4e2-62b3d7519dec-socket-dir\") pod \"csi-hostpathplugin-rhbtn\" (UID: \"1949fb71-227b-4649-b4e2-62b3d7519dec\") " pod="hostpath-provisioner/csi-hostpathplugin-rhbtn" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.717705 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5khn\" (UniqueName: \"kubernetes.io/projected/7a02064e-95a9-4e55-b21e-45868b0362f2-kube-api-access-v5khn\") pod \"packageserver-d55dfcdfc-pkwbh\" (UID: \"7a02064e-95a9-4e55-b21e-45868b0362f2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pkwbh" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.717729 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b7f981e-f1f9-44c3-9b6b-cbb8a2ba1605-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pjgc7\" (UID: \"1b7f981e-f1f9-44c3-9b6b-cbb8a2ba1605\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pjgc7" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.717993 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2-default-certificate\") pod \"router-default-5444994796-bknqb\" (UID: \"1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2\") " pod="openshift-ingress/router-default-5444994796-bknqb" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.718018 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwcdw\" (UniqueName: \"kubernetes.io/projected/71f045da-d27a-4ec0-a059-2107d4e0225c-kube-api-access-zwcdw\") pod \"service-ca-9c57cc56f-cdmgc\" (UID: \"71f045da-d27a-4ec0-a059-2107d4e0225c\") " pod="openshift-service-ca/service-ca-9c57cc56f-cdmgc" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.718069 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3d0f0ae0-7c09-407c-b260-127ff7828c39-metrics-tls\") pod \"dns-default-hk2bk\" 
(UID: \"3d0f0ae0-7c09-407c-b260-127ff7828c39\") " pod="openshift-dns/dns-default-hk2bk" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.718093 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0595af03-a344-4c6c-95ae-0d73f99237ba-cert\") pod \"ingress-canary-x78kb\" (UID: \"0595af03-a344-4c6c-95ae-0d73f99237ba\") " pod="openshift-ingress-canary/ingress-canary-x78kb" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.718118 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/909e154f-849c-4f69-96fb-74649b6db346-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vw5wt\" (UID: \"909e154f-849c-4f69-96fb-74649b6db346\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vw5wt" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.718156 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7a02064e-95a9-4e55-b21e-45868b0362f2-tmpfs\") pod \"packageserver-d55dfcdfc-pkwbh\" (UID: \"7a02064e-95a9-4e55-b21e-45868b0362f2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pkwbh" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.718205 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.718238 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e92cbfe3-753e-4263-b38b-05c6d89fc48c-proxy-tls\") pod \"machine-config-controller-84d6567774-n2ds6\" (UID: \"e92cbfe3-753e-4263-b38b-05c6d89fc48c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n2ds6" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.718354 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/71f045da-d27a-4ec0-a059-2107d4e0225c-signing-cabundle\") pod \"service-ca-9c57cc56f-cdmgc\" (UID: \"71f045da-d27a-4ec0-a059-2107d4e0225c\") " pod="openshift-service-ca/service-ca-9c57cc56f-cdmgc" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.718389 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1949fb71-227b-4649-b4e2-62b3d7519dec-registration-dir\") pod \"csi-hostpathplugin-rhbtn\" (UID: \"1949fb71-227b-4649-b4e2-62b3d7519dec\") " pod="hostpath-provisioner/csi-hostpathplugin-rhbtn" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.718427 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvlht\" (UniqueName: \"kubernetes.io/projected/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-kube-api-access-tvlht\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.718495 4892 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b91d631a-7830-4dc7-a6f1-31de28781a82-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hzsh9\" (UID: \"b91d631a-7830-4dc7-a6f1-31de28781a82\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hzsh9" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.718525 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2-service-ca-bundle\") pod \"router-default-5444994796-bknqb\" (UID: \"1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2\") " pod="openshift-ingress/router-default-5444994796-bknqb" Jan 22 09:12:59 crc kubenswrapper[4892]: E0122 09:12:59.719672 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:00.219650121 +0000 UTC m=+150.063729184 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.735487 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d0f0ae0-7c09-407c-b260-127ff7828c39-config-volume\") pod \"dns-default-hk2bk\" (UID: \"3d0f0ae0-7c09-407c-b260-127ff7828c39\") " pod="openshift-dns/dns-default-hk2bk" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.735528 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b91d631a-7830-4dc7-a6f1-31de28781a82-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hzsh9\" (UID: \"b91d631a-7830-4dc7-a6f1-31de28781a82\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hzsh9" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.735658 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svszb\" (UniqueName: \"kubernetes.io/projected/d0142ef1-2eb1-43e9-99f3-e81a44383bd0-kube-api-access-svszb\") pod \"downloads-7954f5f757-smrbs\" (UID: \"d0142ef1-2eb1-43e9-99f3-e81a44383bd0\") " pod="openshift-console/downloads-7954f5f757-smrbs" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.735677 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5c6k\" (UniqueName: \"kubernetes.io/projected/0595af03-a344-4c6c-95ae-0d73f99237ba-kube-api-access-l5c6k\") pod \"ingress-canary-x78kb\" (UID: \"0595af03-a344-4c6c-95ae-0d73f99237ba\") " pod="openshift-ingress-canary/ingress-canary-x78kb" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.735723 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7a02064e-95a9-4e55-b21e-45868b0362f2-apiservice-cert\") pod 
\"packageserver-d55dfcdfc-pkwbh\" (UID: \"7a02064e-95a9-4e55-b21e-45868b0362f2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pkwbh" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.735753 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-registry-tls\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.735900 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b7f981e-f1f9-44c3-9b6b-cbb8a2ba1605-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pjgc7\" (UID: \"1b7f981e-f1f9-44c3-9b6b-cbb8a2ba1605\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pjgc7" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.735925 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-479sg\" (UniqueName: \"kubernetes.io/projected/0fbd9767-4b93-4496-b8e8-6a7d7f50df5e-kube-api-access-479sg\") pod \"olm-operator-6b444d44fb-pprl2\" (UID: \"0fbd9767-4b93-4496-b8e8-6a7d7f50df5e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pprl2" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.736052 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlj69\" (UniqueName: \"kubernetes.io/projected/3a01b910-5841-4f20-b270-c7040213ac8d-kube-api-access-xlj69\") pod \"control-plane-machine-set-operator-78cbb6b69f-wf6dw\" (UID: \"3a01b910-5841-4f20-b270-c7040213ac8d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wf6dw" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.736070 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1949fb71-227b-4649-b4e2-62b3d7519dec-csi-data-dir\") pod \"csi-hostpathplugin-rhbtn\" (UID: \"1949fb71-227b-4649-b4e2-62b3d7519dec\") " pod="hostpath-provisioner/csi-hostpathplugin-rhbtn" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.736657 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/909e154f-849c-4f69-96fb-74649b6db346-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vw5wt\" (UID: \"909e154f-849c-4f69-96fb-74649b6db346\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vw5wt" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.736765 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/665bbb60-4e79-4d6d-b805-1e03ef3442be-config-volume\") pod \"collect-profiles-29484540-bgd4w\" (UID: \"665bbb60-4e79-4d6d-b805-1e03ef3442be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-bgd4w" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.736812 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e92cbfe3-753e-4263-b38b-05c6d89fc48c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-n2ds6\" (UID: \"e92cbfe3-753e-4263-b38b-05c6d89fc48c\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n2ds6" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.734336 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/71f045da-d27a-4ec0-a059-2107d4e0225c-signing-cabundle\") pod \"service-ca-9c57cc56f-cdmgc\" (UID: \"71f045da-d27a-4ec0-a059-2107d4e0225c\") " pod="openshift-service-ca/service-ca-9c57cc56f-cdmgc" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.737273 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b91d631a-7830-4dc7-a6f1-31de28781a82-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hzsh9\" (UID: \"b91d631a-7830-4dc7-a6f1-31de28781a82\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hzsh9" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.737422 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqpkd\" (UniqueName: \"kubernetes.io/projected/1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2-kube-api-access-fqpkd\") pod \"router-default-5444994796-bknqb\" (UID: \"1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2\") " pod="openshift-ingress/router-default-5444994796-bknqb" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.737444 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a5b2a70f-548b-4483-a064-4f993d38286a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-f5sj6\" (UID: \"a5b2a70f-548b-4483-a064-4f993d38286a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5sj6" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.737495 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g2br\" (UniqueName: \"kubernetes.io/projected/e92cbfe3-753e-4263-b38b-05c6d89fc48c-kube-api-access-7g2br\") pod \"machine-config-controller-84d6567774-n2ds6\" (UID: \"e92cbfe3-753e-4263-b38b-05c6d89fc48c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n2ds6" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.737550 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-trusted-ca\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.737635 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2-stats-auth\") pod \"router-default-5444994796-bknqb\" (UID: \"1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2\") " pod="openshift-ingress/router-default-5444994796-bknqb" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.737650 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-bound-sa-token\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.737690 4892 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/71f045da-d27a-4ec0-a059-2107d4e0225c-signing-key\") pod \"service-ca-9c57cc56f-cdmgc\" (UID: \"71f045da-d27a-4ec0-a059-2107d4e0225c\") " pod="openshift-service-ca/service-ca-9c57cc56f-cdmgc" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.737740 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmlfh\" (UniqueName: \"kubernetes.io/projected/7bd4dc40-4703-47e1-93d5-5995342429e3-kube-api-access-kmlfh\") pod \"migrator-59844c95c7-698rc\" (UID: \"7bd4dc40-4703-47e1-93d5-5995342429e3\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-698rc" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.737799 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b7f981e-f1f9-44c3-9b6b-cbb8a2ba1605-config\") pod \"kube-apiserver-operator-766d6c64bb-pjgc7\" (UID: \"1b7f981e-f1f9-44c3-9b6b-cbb8a2ba1605\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pjgc7" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.737838 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-registry-certificates\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.737904 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1949fb71-227b-4649-b4e2-62b3d7519dec-plugins-dir\") pod \"csi-hostpathplugin-rhbtn\" (UID: \"1949fb71-227b-4649-b4e2-62b3d7519dec\") " pod="hostpath-provisioner/csi-hostpathplugin-rhbtn" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.737939 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.737954 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0fbd9767-4b93-4496-b8e8-6a7d7f50df5e-srv-cert\") pod \"olm-operator-6b444d44fb-pprl2\" (UID: \"0fbd9767-4b93-4496-b8e8-6a7d7f50df5e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pprl2" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.737968 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.737991 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74nd8\" (UniqueName: \"kubernetes.io/projected/1949fb71-227b-4649-b4e2-62b3d7519dec-kube-api-access-74nd8\") pod \"csi-hostpathplugin-rhbtn\" (UID: 
\"1949fb71-227b-4649-b4e2-62b3d7519dec\") " pod="hostpath-provisioner/csi-hostpathplugin-rhbtn" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.738774 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/909e154f-849c-4f69-96fb-74649b6db346-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vw5wt\" (UID: \"909e154f-849c-4f69-96fb-74649b6db346\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vw5wt" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.731982 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2-service-ca-bundle\") pod \"router-default-5444994796-bknqb\" (UID: \"1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2\") " pod="openshift-ingress/router-default-5444994796-bknqb" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.722823 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7a02064e-95a9-4e55-b21e-45868b0362f2-tmpfs\") pod \"packageserver-d55dfcdfc-pkwbh\" (UID: \"7a02064e-95a9-4e55-b21e-45868b0362f2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pkwbh" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.723020 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82cc8d88-f16b-4fa6-90f3-c661418b5a12-config\") pod \"service-ca-operator-777779d784-wfs96\" (UID: \"82cc8d88-f16b-4fa6-90f3-c661418b5a12\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wfs96" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.734631 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82cc8d88-f16b-4fa6-90f3-c661418b5a12-serving-cert\") pod \"service-ca-operator-777779d784-wfs96\" (UID: \"82cc8d88-f16b-4fa6-90f3-c661418b5a12\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wfs96" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.742328 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b91d631a-7830-4dc7-a6f1-31de28781a82-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hzsh9\" (UID: \"b91d631a-7830-4dc7-a6f1-31de28781a82\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hzsh9" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.744857 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b91d631a-7830-4dc7-a6f1-31de28781a82-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hzsh9\" (UID: \"b91d631a-7830-4dc7-a6f1-31de28781a82\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hzsh9" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.744968 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a5b2a70f-548b-4483-a064-4f993d38286a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-f5sj6\" (UID: \"a5b2a70f-548b-4483-a064-4f993d38286a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5sj6" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.745137 4892 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3a01b910-5841-4f20-b270-c7040213ac8d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-wf6dw\" (UID: \"3a01b910-5841-4f20-b270-c7040213ac8d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wf6dw" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.745816 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2-metrics-certs\") pod \"router-default-5444994796-bknqb\" (UID: \"1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2\") " pod="openshift-ingress/router-default-5444994796-bknqb" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.746115 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a5b2a70f-548b-4483-a064-4f993d38286a-proxy-tls\") pod \"machine-config-operator-74547568cd-f5sj6\" (UID: \"a5b2a70f-548b-4483-a064-4f993d38286a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5sj6" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.746198 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0fbd9767-4b93-4496-b8e8-6a7d7f50df5e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pprl2\" (UID: \"0fbd9767-4b93-4496-b8e8-6a7d7f50df5e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pprl2" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.733538 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e92cbfe3-753e-4263-b38b-05c6d89fc48c-proxy-tls\") pod \"machine-config-controller-84d6567774-n2ds6\" (UID: \"e92cbfe3-753e-4263-b38b-05c6d89fc48c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n2ds6" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.734265 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a5b2a70f-548b-4483-a064-4f993d38286a-images\") pod \"machine-config-operator-74547568cd-f5sj6\" (UID: \"a5b2a70f-548b-4483-a064-4f993d38286a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5sj6" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.747519 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/665bbb60-4e79-4d6d-b805-1e03ef3442be-secret-volume\") pod \"collect-profiles-29484540-bgd4w\" (UID: \"665bbb60-4e79-4d6d-b805-1e03ef3442be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-bgd4w" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.747568 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-trusted-ca\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.748412 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-ca-trust-extracted\") pod 
\"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.749198 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b7f981e-f1f9-44c3-9b6b-cbb8a2ba1605-config\") pod \"kube-apiserver-operator-766d6c64bb-pjgc7\" (UID: \"1b7f981e-f1f9-44c3-9b6b-cbb8a2ba1605\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pjgc7" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.750763 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-registry-certificates\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.751362 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.755216 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2-stats-auth\") pod \"router-default-5444994796-bknqb\" (UID: \"1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2\") " pod="openshift-ingress/router-default-5444994796-bknqb" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.755222 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvlht\" (UniqueName: \"kubernetes.io/projected/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-kube-api-access-tvlht\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.755621 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0fbd9767-4b93-4496-b8e8-6a7d7f50df5e-srv-cert\") pod \"olm-operator-6b444d44fb-pprl2\" (UID: \"0fbd9767-4b93-4496-b8e8-6a7d7f50df5e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pprl2" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.757971 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b7f981e-f1f9-44c3-9b6b-cbb8a2ba1605-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pjgc7\" (UID: \"1b7f981e-f1f9-44c3-9b6b-cbb8a2ba1605\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pjgc7" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.759157 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g96m\" (UniqueName: \"kubernetes.io/projected/909e154f-849c-4f69-96fb-74649b6db346-kube-api-access-9g96m\") pod \"kube-storage-version-migrator-operator-b67b599dd-vw5wt\" (UID: \"909e154f-849c-4f69-96fb-74649b6db346\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vw5wt" Jan 22 09:12:59 crc kubenswrapper[4892]: 
I0122 09:12:59.759610 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a02064e-95a9-4e55-b21e-45868b0362f2-webhook-cert\") pod \"packageserver-d55dfcdfc-pkwbh\" (UID: \"7a02064e-95a9-4e55-b21e-45868b0362f2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pkwbh" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.760058 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-registry-tls\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.760087 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b7f981e-f1f9-44c3-9b6b-cbb8a2ba1605-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pjgc7\" (UID: \"1b7f981e-f1f9-44c3-9b6b-cbb8a2ba1605\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pjgc7" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.760470 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2-default-certificate\") pod \"router-default-5444994796-bknqb\" (UID: \"1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2\") " pod="openshift-ingress/router-default-5444994796-bknqb" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.760948 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b91d631a-7830-4dc7-a6f1-31de28781a82-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hzsh9\" (UID: \"b91d631a-7830-4dc7-a6f1-31de28781a82\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hzsh9" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.762050 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7a02064e-95a9-4e55-b21e-45868b0362f2-apiservice-cert\") pod \"packageserver-d55dfcdfc-pkwbh\" (UID: \"7a02064e-95a9-4e55-b21e-45868b0362f2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pkwbh" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.762840 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lk47\" (UniqueName: \"kubernetes.io/projected/665bbb60-4e79-4d6d-b805-1e03ef3442be-kube-api-access-2lk47\") pod \"collect-profiles-29484540-bgd4w\" (UID: \"665bbb60-4e79-4d6d-b805-1e03ef3442be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-bgd4w" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.763351 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/71f045da-d27a-4ec0-a059-2107d4e0225c-signing-key\") pod \"service-ca-9c57cc56f-cdmgc\" (UID: \"71f045da-d27a-4ec0-a059-2107d4e0225c\") " pod="openshift-service-ca/service-ca-9c57cc56f-cdmgc" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.763429 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s75xn\" (UniqueName: \"kubernetes.io/projected/b91d631a-7830-4dc7-a6f1-31de28781a82-kube-api-access-s75xn\") pod \"cluster-image-registry-operator-dc59b4c8b-hzsh9\" 
(UID: \"b91d631a-7830-4dc7-a6f1-31de28781a82\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hzsh9" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.766045 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxcgb\" (UniqueName: \"kubernetes.io/projected/a5b2a70f-548b-4483-a064-4f993d38286a-kube-api-access-nxcgb\") pod \"machine-config-operator-74547568cd-f5sj6\" (UID: \"a5b2a70f-548b-4483-a064-4f993d38286a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5sj6" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.766491 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwcdw\" (UniqueName: \"kubernetes.io/projected/71f045da-d27a-4ec0-a059-2107d4e0225c-kube-api-access-zwcdw\") pod \"service-ca-9c57cc56f-cdmgc\" (UID: \"71f045da-d27a-4ec0-a059-2107d4e0225c\") " pod="openshift-service-ca/service-ca-9c57cc56f-cdmgc" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.766765 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlj69\" (UniqueName: \"kubernetes.io/projected/3a01b910-5841-4f20-b270-c7040213ac8d-kube-api-access-xlj69\") pod \"control-plane-machine-set-operator-78cbb6b69f-wf6dw\" (UID: \"3a01b910-5841-4f20-b270-c7040213ac8d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wf6dw" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.767860 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-479sg\" (UniqueName: \"kubernetes.io/projected/0fbd9767-4b93-4496-b8e8-6a7d7f50df5e-kube-api-access-479sg\") pod \"olm-operator-6b444d44fb-pprl2\" (UID: \"0fbd9767-4b93-4496-b8e8-6a7d7f50df5e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pprl2" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.769076 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29d5w\" (UniqueName: \"kubernetes.io/projected/82cc8d88-f16b-4fa6-90f3-c661418b5a12-kube-api-access-29d5w\") pod \"service-ca-operator-777779d784-wfs96\" (UID: \"82cc8d88-f16b-4fa6-90f3-c661418b5a12\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wfs96" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.769760 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-bound-sa-token\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.770212 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5khn\" (UniqueName: \"kubernetes.io/projected/7a02064e-95a9-4e55-b21e-45868b0362f2-kube-api-access-v5khn\") pod \"packageserver-d55dfcdfc-pkwbh\" (UID: \"7a02064e-95a9-4e55-b21e-45868b0362f2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pkwbh" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.770810 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmlfh\" (UniqueName: \"kubernetes.io/projected/7bd4dc40-4703-47e1-93d5-5995342429e3-kube-api-access-kmlfh\") pod \"migrator-59844c95c7-698rc\" (UID: \"7bd4dc40-4703-47e1-93d5-5995342429e3\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-698rc" Jan 22 09:12:59 crc 
kubenswrapper[4892]: I0122 09:12:59.772159 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svszb\" (UniqueName: \"kubernetes.io/projected/d0142ef1-2eb1-43e9-99f3-e81a44383bd0-kube-api-access-svszb\") pod \"downloads-7954f5f757-smrbs\" (UID: \"d0142ef1-2eb1-43e9-99f3-e81a44383bd0\") " pod="openshift-console/downloads-7954f5f757-smrbs" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.773075 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g2br\" (UniqueName: \"kubernetes.io/projected/e92cbfe3-753e-4263-b38b-05c6d89fc48c-kube-api-access-7g2br\") pod \"machine-config-controller-84d6567774-n2ds6\" (UID: \"e92cbfe3-753e-4263-b38b-05c6d89fc48c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n2ds6" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.789730 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hzsh9" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.794632 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqpkd\" (UniqueName: \"kubernetes.io/projected/1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2-kube-api-access-fqpkd\") pod \"router-default-5444994796-bknqb\" (UID: \"1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2\") " pod="openshift-ingress/router-default-5444994796-bknqb" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.809816 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5sj6" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.839481 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.839792 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1949fb71-227b-4649-b4e2-62b3d7519dec-plugins-dir\") pod \"csi-hostpathplugin-rhbtn\" (UID: \"1949fb71-227b-4649-b4e2-62b3d7519dec\") " pod="hostpath-provisioner/csi-hostpathplugin-rhbtn" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.839835 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74nd8\" (UniqueName: \"kubernetes.io/projected/1949fb71-227b-4649-b4e2-62b3d7519dec-kube-api-access-74nd8\") pod \"csi-hostpathplugin-rhbtn\" (UID: \"1949fb71-227b-4649-b4e2-62b3d7519dec\") " pod="hostpath-provisioner/csi-hostpathplugin-rhbtn" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.839875 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1949fb71-227b-4649-b4e2-62b3d7519dec-mountpoint-dir\") pod \"csi-hostpathplugin-rhbtn\" (UID: \"1949fb71-227b-4649-b4e2-62b3d7519dec\") " pod="hostpath-provisioner/csi-hostpathplugin-rhbtn" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.839926 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkvxg\" (UniqueName: \"kubernetes.io/projected/3d0f0ae0-7c09-407c-b260-127ff7828c39-kube-api-access-dkvxg\") pod \"dns-default-hk2bk\" 
(UID: \"3d0f0ae0-7c09-407c-b260-127ff7828c39\") " pod="openshift-dns/dns-default-hk2bk" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.839958 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1949fb71-227b-4649-b4e2-62b3d7519dec-socket-dir\") pod \"csi-hostpathplugin-rhbtn\" (UID: \"1949fb71-227b-4649-b4e2-62b3d7519dec\") " pod="hostpath-provisioner/csi-hostpathplugin-rhbtn" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.839982 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3d0f0ae0-7c09-407c-b260-127ff7828c39-metrics-tls\") pod \"dns-default-hk2bk\" (UID: \"3d0f0ae0-7c09-407c-b260-127ff7828c39\") " pod="openshift-dns/dns-default-hk2bk" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.840005 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0595af03-a344-4c6c-95ae-0d73f99237ba-cert\") pod \"ingress-canary-x78kb\" (UID: \"0595af03-a344-4c6c-95ae-0d73f99237ba\") " pod="openshift-ingress-canary/ingress-canary-x78kb" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.840043 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1949fb71-227b-4649-b4e2-62b3d7519dec-registration-dir\") pod \"csi-hostpathplugin-rhbtn\" (UID: \"1949fb71-227b-4649-b4e2-62b3d7519dec\") " pod="hostpath-provisioner/csi-hostpathplugin-rhbtn" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.840072 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d0f0ae0-7c09-407c-b260-127ff7828c39-config-volume\") pod \"dns-default-hk2bk\" (UID: \"3d0f0ae0-7c09-407c-b260-127ff7828c39\") " pod="openshift-dns/dns-default-hk2bk" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.840097 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5c6k\" (UniqueName: \"kubernetes.io/projected/0595af03-a344-4c6c-95ae-0d73f99237ba-kube-api-access-l5c6k\") pod \"ingress-canary-x78kb\" (UID: \"0595af03-a344-4c6c-95ae-0d73f99237ba\") " pod="openshift-ingress-canary/ingress-canary-x78kb" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.840129 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1949fb71-227b-4649-b4e2-62b3d7519dec-csi-data-dir\") pod \"csi-hostpathplugin-rhbtn\" (UID: \"1949fb71-227b-4649-b4e2-62b3d7519dec\") " pod="hostpath-provisioner/csi-hostpathplugin-rhbtn" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.840253 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1949fb71-227b-4649-b4e2-62b3d7519dec-csi-data-dir\") pod \"csi-hostpathplugin-rhbtn\" (UID: \"1949fb71-227b-4649-b4e2-62b3d7519dec\") " pod="hostpath-provisioner/csi-hostpathplugin-rhbtn" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.840513 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1949fb71-227b-4649-b4e2-62b3d7519dec-socket-dir\") pod \"csi-hostpathplugin-rhbtn\" (UID: \"1949fb71-227b-4649-b4e2-62b3d7519dec\") " pod="hostpath-provisioner/csi-hostpathplugin-rhbtn" Jan 22 09:12:59 crc kubenswrapper[4892]: 
E0122 09:12:59.840594 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:00.340577417 +0000 UTC m=+150.184656480 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.840635 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1949fb71-227b-4649-b4e2-62b3d7519dec-plugins-dir\") pod \"csi-hostpathplugin-rhbtn\" (UID: \"1949fb71-227b-4649-b4e2-62b3d7519dec\") " pod="hostpath-provisioner/csi-hostpathplugin-rhbtn" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.840754 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1949fb71-227b-4649-b4e2-62b3d7519dec-mountpoint-dir\") pod \"csi-hostpathplugin-rhbtn\" (UID: \"1949fb71-227b-4649-b4e2-62b3d7519dec\") " pod="hostpath-provisioner/csi-hostpathplugin-rhbtn" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.840843 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1949fb71-227b-4649-b4e2-62b3d7519dec-registration-dir\") pod \"csi-hostpathplugin-rhbtn\" (UID: \"1949fb71-227b-4649-b4e2-62b3d7519dec\") " pod="hostpath-provisioner/csi-hostpathplugin-rhbtn" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.841943 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d0f0ae0-7c09-407c-b260-127ff7828c39-config-volume\") pod \"dns-default-hk2bk\" (UID: \"3d0f0ae0-7c09-407c-b260-127ff7828c39\") " pod="openshift-dns/dns-default-hk2bk" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.844982 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0595af03-a344-4c6c-95ae-0d73f99237ba-cert\") pod \"ingress-canary-x78kb\" (UID: \"0595af03-a344-4c6c-95ae-0d73f99237ba\") " pod="openshift-ingress-canary/ingress-canary-x78kb" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.845305 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3d0f0ae0-7c09-407c-b260-127ff7828c39-metrics-tls\") pod \"dns-default-hk2bk\" (UID: \"3d0f0ae0-7c09-407c-b260-127ff7828c39\") " pod="openshift-dns/dns-default-hk2bk" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.869894 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-bgd4w" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.877434 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wfs96" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.883244 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74nd8\" (UniqueName: \"kubernetes.io/projected/1949fb71-227b-4649-b4e2-62b3d7519dec-kube-api-access-74nd8\") pod \"csi-hostpathplugin-rhbtn\" (UID: \"1949fb71-227b-4649-b4e2-62b3d7519dec\") " pod="hostpath-provisioner/csi-hostpathplugin-rhbtn" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.885101 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-698rc" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.898154 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkvxg\" (UniqueName: \"kubernetes.io/projected/3d0f0ae0-7c09-407c-b260-127ff7828c39-kube-api-access-dkvxg\") pod \"dns-default-hk2bk\" (UID: \"3d0f0ae0-7c09-407c-b260-127ff7828c39\") " pod="openshift-dns/dns-default-hk2bk" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.898494 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vw5wt" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.905212 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n2ds6" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.912089 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pkwbh" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.917908 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5c6k\" (UniqueName: \"kubernetes.io/projected/0595af03-a344-4c6c-95ae-0d73f99237ba-kube-api-access-l5c6k\") pod \"ingress-canary-x78kb\" (UID: \"0595af03-a344-4c6c-95ae-0d73f99237ba\") " pod="openshift-ingress-canary/ingress-canary-x78kb" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.918594 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vgpdt"] Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.933967 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pjgc7" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.941597 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:12:59 crc kubenswrapper[4892]: E0122 09:12:59.941921 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:00.441906619 +0000 UTC m=+150.285985682 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.946130 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-cdmgc" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.952230 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-bknqb" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.959774 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wf6dw" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.973259 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-smrbs" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.986402 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-b7c5h"] Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.988739 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pprl2" Jan 22 09:12:59 crc kubenswrapper[4892]: I0122 09:12:59.995889 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-hk2bk" Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.014164 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-rhbtn" Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.019733 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-x78kb" Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.038678 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-kw2w9"] Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.044598 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:13:00 crc kubenswrapper[4892]: E0122 09:13:00.044921 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:00.544905605 +0000 UTC m=+150.388984668 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.088504 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-95xtn"] Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.117765 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-86wr5"] Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.145691 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:00 crc kubenswrapper[4892]: E0122 09:13:00.146005 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:00.6459934 +0000 UTC m=+150.490072463 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.168473 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbzt8"] Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.246311 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:13:00 crc kubenswrapper[4892]: E0122 09:13:00.247134 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:00.747116156 +0000 UTC m=+150.591195219 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:00 crc kubenswrapper[4892]: W0122 09:13:00.326337 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48ffec72_092a_4145_9136_d05df9fab68a.slice/crio-d4216fef0ca98d1a0a1590f688aa3310b061c017bef54a94947a12172fc29fca WatchSource:0}: Error finding container d4216fef0ca98d1a0a1590f688aa3310b061c017bef54a94947a12172fc29fca: Status 404 returned error can't find the container with id d4216fef0ca98d1a0a1590f688aa3310b061c017bef54a94947a12172fc29fca Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.349671 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:00 crc kubenswrapper[4892]: E0122 09:13:00.350397 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:00.850381629 +0000 UTC m=+150.694460692 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.450605 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:13:00 crc kubenswrapper[4892]: E0122 09:13:00.451341 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:00.95131863 +0000 UTC m=+150.795397693 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.508045 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m5gnq"] Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.515588 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-26bnl"] Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.555239 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:00 crc kubenswrapper[4892]: E0122 09:13:00.555626 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:01.05561027 +0000 UTC m=+150.899689333 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.563934 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-st5q7"] Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.569086 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hzsh9"] Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.598897 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484540-bgd4w"] Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.647877 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-698rc"] Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.656439 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:13:00 crc kubenswrapper[4892]: E0122 09:13:00.656656 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:01.156616313 +0000 UTC m=+151.000695376 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.656833 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:00 crc kubenswrapper[4892]: E0122 09:13:00.657234 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:01.157226299 +0000 UTC m=+151.001305362 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.677703 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-56s7c" event={"ID":"fa7d6587-5137-4b9b-accb-3b4800c1bce6","Type":"ContainerStarted","Data":"0e40c8beb807454cb29b37ba034642a70c19c4ab81643c0ac90bb0b5a53235df"} Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.677742 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-56s7c" event={"ID":"fa7d6587-5137-4b9b-accb-3b4800c1bce6","Type":"ContainerStarted","Data":"0bfe254aa2d484477c6c89466dd95a27aaae97d1b415d8faf04072499e44313f"} Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.678277 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-56s7c" Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.682031 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" event={"ID":"3a357ae9-5621-4063-b475-508269240d98","Type":"ContainerStarted","Data":"ab956946fd34e19c4a008613073246725924b9e5f4bde980e711c1d5a623756b"} Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.683094 4892 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-56s7c container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.683142 4892 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-56s7c" podUID="fa7d6587-5137-4b9b-accb-3b4800c1bce6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.685352 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-s9gdc" event={"ID":"a3ed54ef-a344-4af6-9e4c-abe9f8194edd","Type":"ContainerStarted","Data":"7d63cb3c195f264eeed3fd6168e3cb5ddfb7c186882763b638ffd1bf93f29cda"} Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.685409 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-s9gdc" event={"ID":"a3ed54ef-a344-4af6-9e4c-abe9f8194edd","Type":"ContainerStarted","Data":"cf84e870213be3a956774f1454593d0d907f79a56356003dd134ab829b9212f5"} Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.687429 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bkqvb" event={"ID":"c2b2a373-92d3-4af2-94e3-e4611dbd7785","Type":"ContainerStarted","Data":"f66cd43027013646fa78ed83aa1733fbd4e8e1a7b9deb8600f7e20a0280b4285"} Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.690738 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-95xtn" event={"ID":"ab72073f-69cb-4719-b896-54618a6925db","Type":"ContainerStarted","Data":"af947ddd1f69a52e1627614778a5dcb4d5bccc7fc996b43da36f5102463d7c2f"} Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.706216 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5qlbn" event={"ID":"beeba5d1-3ae7-4a2f-9e58-8b5baeea4ea6","Type":"ContainerStarted","Data":"339f3ea9d2107a323363c0a84906d320e13859df344c40451804132d5e084f75"} Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.706257 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5qlbn" event={"ID":"beeba5d1-3ae7-4a2f-9e58-8b5baeea4ea6","Type":"ContainerStarted","Data":"023ed3c07315fc2816557ce9f0a69668e4fe822f1006c49dedeecb3b49d60be4"} Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.716560 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-f5sj6"] Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.726556 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c818d565a8f0738e9f2e9e595c3a4fbb00d65c5b1c101fb7ec736b1b7fae08cb"} Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.727004 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.731568 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbzt8" event={"ID":"48ffec72-092a-4145-9136-d05df9fab68a","Type":"ContainerStarted","Data":"d4216fef0ca98d1a0a1590f688aa3310b061c017bef54a94947a12172fc29fca"} Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.732516 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns-operator/dns-operator-744455d44c-82b7j" event={"ID":"5fee1fa7-f83e-4be4-88f0-ed57f5f1d051","Type":"ContainerStarted","Data":"f175472aa6d99eeaf9b84b3e6ce1a5728aa3af5f28e9bc6f529049f75d3433b6"} Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.746685 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-96sxb" event={"ID":"0f5b8d1f-6184-4d2f-81f5-bca2d7b198a3","Type":"ContainerStarted","Data":"48331007743b9b54d2df13bd172563548dc87fed20cbe6159e0dd0481435cd6b"} Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.746749 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-96sxb" event={"ID":"0f5b8d1f-6184-4d2f-81f5-bca2d7b198a3","Type":"ContainerStarted","Data":"dc1ceaf97bf892064bd9b18ae98972fd2d14e25a4d42bd43d6ddace4ec784f73"} Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.749424 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-96sxb" Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.755696 4892 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-96sxb container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.755755 4892 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-96sxb" podUID="0f5b8d1f-6184-4d2f-81f5-bca2d7b198a3" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.759849 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:13:00 crc kubenswrapper[4892]: E0122 09:13:00.760345 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:01.260331007 +0000 UTC m=+151.104410070 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.769808 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"68605d71894d2e983c0e8b28b365e66c81668bdd8febbeb0e5fbccac4b4ed31d"} Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.769847 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"58d7b15f4ded822128006789bec6ed3d56d01b3de451bcd2d82b00f23d2f89be"} Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.785148 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kw2w9" event={"ID":"f2895e12-d7e7-4eb4-8455-cae19e2347c2","Type":"ContainerStarted","Data":"f617bc75234f53a30ef39a56b69ea527d7fd4431c59c282868545e2646f0f03c"} Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.786101 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pjgc7"] Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.800051 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-4knsq" event={"ID":"93fe0c31-5f71-4e0f-8325-3b246885136e","Type":"ContainerStarted","Data":"377b4878d1b8c8f62b29309285358ccc59e9ca9b697bc506592f39c8a44aeebb"} Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.804024 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-mhgmk" event={"ID":"09f94488-4261-4a70-ab65-e85c42ba3313","Type":"ContainerStarted","Data":"1ccb6f6c3a4215b2808c0d67057f68f2e0eeb34328f921bc52f0760e1c450069"} Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.804059 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-mhgmk" event={"ID":"09f94488-4261-4a70-ab65-e85c42ba3313","Type":"ContainerStarted","Data":"e333339c8f78fef60fdbad4db9fba87c15c259c2d38c49268b0467d4e700e2c8"} Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.812895 4892 generic.go:334] "Generic (PLEG): container finished" podID="6a6e5907-552b-4dc7-884f-d766a773e8b0" containerID="068aed68eacfc6695f0ff10d1a7de28d68e217631ce76b4c89ba4d357814dee0" exitCode=0 Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.813452 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq" event={"ID":"6a6e5907-552b-4dc7-884f-d766a773e8b0","Type":"ContainerDied","Data":"068aed68eacfc6695f0ff10d1a7de28d68e217631ce76b4c89ba4d357814dee0"} Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.834407 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7c5h" 
event={"ID":"77aaec88-130d-43b9-9828-24098fc3748d","Type":"ContainerStarted","Data":"72e63384330bf585b939cb896b761813bc3beb77c22408f5f68d415bfc282e54"} Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.861771 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wtrx4" event={"ID":"97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97","Type":"ContainerStarted","Data":"f28d27a6cb25281721d08780a3415f6f264057f46950100c360d86c7f596c3d1"} Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.861832 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wtrx4" event={"ID":"97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97","Type":"ContainerStarted","Data":"5801c2b5cb6af330bc0dab2f3d75f8e27e2136c2980b847016b820ec1cf86f99"} Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.862494 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.912790 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-wtrx4" Jan 22 09:13:00 crc kubenswrapper[4892]: E0122 09:13:00.915302 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:01.415261657 +0000 UTC m=+151.259340720 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.916513 4892 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-wtrx4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.916551 4892 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-wtrx4" podUID="97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.923195 4892 generic.go:334] "Generic (PLEG): container finished" podID="ed66930a-e393-47ea-a98d-907a1327edac" containerID="375465e651489071db59796491e733c9baf8e4487cec4da8f5e488397aa0402e" exitCode=0 Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.923503 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" event={"ID":"ed66930a-e393-47ea-a98d-907a1327edac","Type":"ContainerDied","Data":"375465e651489071db59796491e733c9baf8e4487cec4da8f5e488397aa0402e"} Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.923652 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" event={"ID":"ed66930a-e393-47ea-a98d-907a1327edac","Type":"ContainerStarted","Data":"d833de0ecbeced9bfbc09501f468bda9d79788a2c67dfd1bd65319518d02c66f"} Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.927879 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-s9gdc" podStartSLOduration=125.923680938 podStartE2EDuration="2m5.923680938s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:00.913261334 +0000 UTC m=+150.757340397" watchObservedRunningTime="2026-01-22 09:13:00.923680938 +0000 UTC m=+150.767759991" Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.938594 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"de8c9b73a5681f31d3818ebc6bd111551c18f314ac1f3b634308372a192123a6"} Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.938652 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f4c776d9a6a556e3c21ff690b438957e7c26df017553d9209bd62f398c55d526"} Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.967196 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:13:00 crc kubenswrapper[4892]: E0122 09:13:00.967538 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:01.467512929 +0000 UTC m=+151.311591992 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.967806 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:00 crc kubenswrapper[4892]: E0122 09:13:00.969269 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:01.469261085 +0000 UTC m=+151.313340148 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.975446 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ctztv" event={"ID":"fc5af5cc-4a80-4fe0-9c4a-498408cdc453","Type":"ContainerStarted","Data":"e9b203f0c37cfb816717b10027a0025d541a5d4dcf89826756821c8ba94784c1"} Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.976130 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ctztv" event={"ID":"fc5af5cc-4a80-4fe0-9c4a-498408cdc453","Type":"ContainerStarted","Data":"11fa4fe6b1de589cb71a493d48b369b3a4801c42bfa47c8c2e06856c48e372e9"} Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.985271 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-86wr5" event={"ID":"23125b22-0965-46a8-a698-dc256f032b3c","Type":"ContainerStarted","Data":"e5cb8bfe7314805409ff8407c4544d42ad6a1975dedd52ed43a63ca56d4dd773"} Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.987985 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-96sxb" podStartSLOduration=125.987968227 podStartE2EDuration="2m5.987968227s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:00.986687743 +0000 UTC m=+150.830766806" watchObservedRunningTime="2026-01-22 09:13:00.987968227 +0000 UTC m=+150.832047280" Jan 22 09:13:00 crc kubenswrapper[4892]: I0122 09:13:00.999298 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vgpdt" event={"ID":"a91f44ce-a5d5-4379-a443-c61626f142f7","Type":"ContainerStarted","Data":"1ab0fb8ab5537520477c976647d4fe6f808f1ba50a7d9229f5caa47f718e908e"} Jan 22 09:13:01 crc kubenswrapper[4892]: I0122 09:13:01.027857 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-4z8st" event={"ID":"d20382c6-63e0-44c4-994b-952a489ece50","Type":"ContainerStarted","Data":"60560c62d94f8103686bea9e92f5c070db96e20db1524d6d8ba0f6dcbbd80e32"} Jan 22 09:13:01 crc kubenswrapper[4892]: I0122 09:13:01.027925 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-4z8st" event={"ID":"d20382c6-63e0-44c4-994b-952a489ece50","Type":"ContainerStarted","Data":"ba4e0096ad6bffc3dc64196c49a585c49cd06648a667e64815459eeb8f923da9"} Jan 22 09:13:01 crc kubenswrapper[4892]: I0122 09:13:01.028350 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-4z8st" Jan 22 09:13:01 crc kubenswrapper[4892]: I0122 09:13:01.032244 4892 patch_prober.go:28] interesting pod/console-operator-58897d9998-4z8st container/console-operator namespace/openshift-console-operator: Readiness probe 
status=failure output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Jan 22 09:13:01 crc kubenswrapper[4892]: I0122 09:13:01.032305 4892 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-4z8st" podUID="d20382c6-63e0-44c4-994b-952a489ece50" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused" Jan 22 09:13:01 crc kubenswrapper[4892]: I0122 09:13:01.049784 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fdkt8" event={"ID":"5f7e5be0-b733-467d-afe0-35af7555688b","Type":"ContainerStarted","Data":"6814e3abee371563580119641330ba95ce897e9345824c60f3d5d8c20937d1ba"} Jan 22 09:13:01 crc kubenswrapper[4892]: I0122 09:13:01.049836 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fdkt8" event={"ID":"5f7e5be0-b733-467d-afe0-35af7555688b","Type":"ContainerStarted","Data":"f48ac5144b551b710aa1a75f4080a4447aae0cd074b1aea7e94348ea67f3e9ac"} Jan 22 09:13:01 crc kubenswrapper[4892]: I0122 09:13:01.069677 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:13:01 crc kubenswrapper[4892]: E0122 09:13:01.069998 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:01.569980291 +0000 UTC m=+151.414059354 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:01 crc kubenswrapper[4892]: I0122 09:13:01.075153 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-56s7c" podStartSLOduration=126.075135416 podStartE2EDuration="2m6.075135416s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:01.072724293 +0000 UTC m=+150.916803366" watchObservedRunningTime="2026-01-22 09:13:01.075135416 +0000 UTC m=+150.919214479" Jan 22 09:13:01 crc kubenswrapper[4892]: I0122 09:13:01.175305 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:01 crc kubenswrapper[4892]: E0122 09:13:01.176032 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:01.675985876 +0000 UTC m=+151.520064989 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:01 crc kubenswrapper[4892]: I0122 09:13:01.248513 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-4z8st" podStartSLOduration=126.24848939 podStartE2EDuration="2m6.24848939s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:01.195403566 +0000 UTC m=+151.039482629" watchObservedRunningTime="2026-01-22 09:13:01.24848939 +0000 UTC m=+151.092568453" Jan 22 09:13:01 crc kubenswrapper[4892]: I0122 09:13:01.256788 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fdkt8" podStartSLOduration=126.256728337 podStartE2EDuration="2m6.256728337s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:01.24924975 +0000 UTC m=+151.093328813" watchObservedRunningTime="2026-01-22 09:13:01.256728337 +0000 UTC m=+151.100807400" Jan 22 09:13:01 crc kubenswrapper[4892]: I0122 09:13:01.276893 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:13:01 crc kubenswrapper[4892]: E0122 09:13:01.277240 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:01.777225975 +0000 UTC m=+151.621305038 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:01 crc kubenswrapper[4892]: I0122 09:13:01.378217 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:01 crc kubenswrapper[4892]: E0122 09:13:01.378726 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:01.878714661 +0000 UTC m=+151.722793724 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:01 crc kubenswrapper[4892]: I0122 09:13:01.418391 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-wtrx4" podStartSLOduration=126.418368532 podStartE2EDuration="2m6.418368532s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:01.276546977 +0000 UTC m=+151.120626040" watchObservedRunningTime="2026-01-22 09:13:01.418368532 +0000 UTC m=+151.262447615" Jan 22 09:13:01 crc kubenswrapper[4892]: I0122 09:13:01.482495 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:13:01 crc kubenswrapper[4892]: E0122 09:13:01.482888 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:01.982870237 +0000 UTC m=+151.826949310 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:01 crc kubenswrapper[4892]: I0122 09:13:01.584782 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:01 crc kubenswrapper[4892]: E0122 09:13:01.586007 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:02.085989726 +0000 UTC m=+151.930068799 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:01 crc kubenswrapper[4892]: I0122 09:13:01.650459 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wf6dw"] Jan 22 09:13:01 crc kubenswrapper[4892]: I0122 09:13:01.688660 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:13:01 crc kubenswrapper[4892]: E0122 09:13:01.689159 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:02.189137735 +0000 UTC m=+152.033216808 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:01 crc kubenswrapper[4892]: I0122 09:13:01.794941 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:01 crc kubenswrapper[4892]: E0122 09:13:01.795724 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:02.295703754 +0000 UTC m=+152.139782877 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:01 crc kubenswrapper[4892]: I0122 09:13:01.829751 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cdmgc"] Jan 22 09:13:01 crc kubenswrapper[4892]: I0122 09:13:01.890059 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pprl2"] Jan 22 09:13:01 crc kubenswrapper[4892]: I0122 09:13:01.892577 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-x78kb"] Jan 22 09:13:01 crc kubenswrapper[4892]: I0122 09:13:01.896521 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:13:01 crc kubenswrapper[4892]: E0122 09:13:01.896993 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:02.396974744 +0000 UTC m=+152.241053807 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.000297 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:02 crc kubenswrapper[4892]: E0122 09:13:02.000862 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:02.500850833 +0000 UTC m=+152.344929906 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.105267 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-smrbs"] Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.106357 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:13:02 crc kubenswrapper[4892]: E0122 09:13:02.107053 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:02.607033502 +0000 UTC m=+152.451112575 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.128063 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wf6dw" event={"ID":"3a01b910-5841-4f20-b270-c7040213ac8d","Type":"ContainerStarted","Data":"8889984ecb2d0da1cb808725347a5d0396dc0efe8a6fff8da1c89d4b61241f30"} Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.156726 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-4knsq" event={"ID":"93fe0c31-5f71-4e0f-8325-3b246885136e","Type":"ContainerStarted","Data":"9edd3ac610561f5416a5ec3a1e3eeb7c6eafe0e288cc134bf8349b5e41b5bf84"} Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.211332 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-95xtn" event={"ID":"ab72073f-69cb-4719-b896-54618a6925db","Type":"ContainerStarted","Data":"a67beadcdf48407f8325ab9e8ee85c8ec3b824c650824157f0be65277264a6a2"} Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.212232 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:02 crc kubenswrapper[4892]: E0122 09:13:02.213496 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:02.713486028 +0000 UTC m=+152.557565091 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.250190 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5qlbn" event={"ID":"beeba5d1-3ae7-4a2f-9e58-8b5baeea4ea6","Type":"ContainerStarted","Data":"15736db14318dbbdaea6937a5478e0e06e3f383bf12dac6314711b42d4729240"} Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.265910 4892 generic.go:334] "Generic (PLEG): container finished" podID="f2895e12-d7e7-4eb4-8455-cae19e2347c2" containerID="31c0b4be065b42777c9ca4fcd9bb8eabaebfa984430aebcddb5e9ea31c06289c" exitCode=0 Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.265978 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kw2w9" event={"ID":"f2895e12-d7e7-4eb4-8455-cae19e2347c2","Type":"ContainerDied","Data":"31c0b4be065b42777c9ca4fcd9bb8eabaebfa984430aebcddb5e9ea31c06289c"} Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.283691 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-4knsq" podStartSLOduration=6.283671282 podStartE2EDuration="6.283671282s" podCreationTimestamp="2026-01-22 09:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:02.243263391 +0000 UTC m=+152.087342454" watchObservedRunningTime="2026-01-22 09:13:02.283671282 +0000 UTC m=+152.127750345" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.286018 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wfs96"] Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.299621 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-698rc" event={"ID":"7bd4dc40-4703-47e1-93d5-5995342429e3","Type":"ContainerStarted","Data":"505ac1fbfdf2fe5be35cec0d1649f1a8419b829fff13bb939b763ce0ae510c96"} Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.299662 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-698rc" event={"ID":"7bd4dc40-4703-47e1-93d5-5995342429e3","Type":"ContainerStarted","Data":"e0a5ce4fc17c1a18af8185ba88824e4ff4226dbcd46ba0097a627b00b66b4c1b"} Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.312899 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:13:02 crc kubenswrapper[4892]: E0122 09:13:02.314341 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-22 09:13:02.814320737 +0000 UTC m=+152.658399800 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.325589 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-82b7j" event={"ID":"5fee1fa7-f83e-4be4-88f0-ed57f5f1d051","Type":"ContainerStarted","Data":"0cbce2185de75e8582eabd0f87203aa05ea130c5abf44aa8565b94f02e0f33bd"} Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.338056 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-cdmgc" event={"ID":"71f045da-d27a-4ec0-a059-2107d4e0225c","Type":"ContainerStarted","Data":"70a2ae931add7ddb3263ddc79d87b174a39fc8b161bd9714ee145b852d8cdc20"} Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.394748 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pjgc7" event={"ID":"1b7f981e-f1f9-44c3-9b6b-cbb8a2ba1605","Type":"ContainerStarted","Data":"32a3fd682c811d6980491e9aa2c680502b088fd2db7a63e00e51bdcb29876bc0"} Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.403704 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pkwbh"] Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.416496 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hzsh9" event={"ID":"b91d631a-7830-4dc7-a6f1-31de28781a82","Type":"ContainerStarted","Data":"26434a175600c99f356d41e436e287bb147be27f8977b298f076bd2cfcf910ef"} Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.416541 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hzsh9" event={"ID":"b91d631a-7830-4dc7-a6f1-31de28781a82","Type":"ContainerStarted","Data":"47a0ccfb3840b13cbe6590aacb8948dfdb96d33017583509f5b6c6cce4ae2a15"} Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.417245 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:02 crc kubenswrapper[4892]: E0122 09:13:02.417561 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:02.917549339 +0000 UTC m=+152.761628402 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.426632 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vw5wt"] Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.429673 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-bknqb" event={"ID":"1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2","Type":"ContainerStarted","Data":"f7ddb76e599e8076273f6573e5d4935b83bf45b99ef0c0e4a13d7a3a73c9ab30"} Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.429727 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-bknqb" event={"ID":"1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2","Type":"ContainerStarted","Data":"21b3c7319d3f139e7deba06029080f6e70ffd0d6f2dc4fe6f584ba146e105148"} Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.442073 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-hk2bk"] Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.454751 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-95xtn" podStartSLOduration=127.454732735 podStartE2EDuration="2m7.454732735s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:02.322134052 +0000 UTC m=+152.166213115" watchObservedRunningTime="2026-01-22 09:13:02.454732735 +0000 UTC m=+152.298811798" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.457520 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m5gnq" event={"ID":"ea2bbc05-6497-4f5d-9681-a4f2d27e7aa7","Type":"ContainerStarted","Data":"c1683eff0a3ebf2190808863de3f40eadf59a79c3cebb7a08137de6210ebada9"} Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.467646 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-n2ds6"] Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.483085 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ctztv" event={"ID":"fc5af5cc-4a80-4fe0-9c4a-498408cdc453","Type":"ContainerStarted","Data":"3249886ce2bb02c39114ba0a83397aaee6284dffd48d6995e404dfddcdd66270"} Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.483752 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ctztv" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.485798 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-26bnl" 
event={"ID":"e2f01c3a-8bc3-460e-ba9e-3e21d9a15621","Type":"ContainerStarted","Data":"364664677d6ee207da3407984353d365d1d259bce6afba33ab4c53f544758258"} Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.485852 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-26bnl" event={"ID":"e2f01c3a-8bc3-460e-ba9e-3e21d9a15621","Type":"ContainerStarted","Data":"a71ba14a64e587cb1d0c3ae11c21a4a555c3af17238b11977519867184a66544"} Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.490589 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rhbtn"] Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.495800 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w7q5c"] Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.496885 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w7q5c" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.517697 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5qlbn" podStartSLOduration=127.517679329 podStartE2EDuration="2m7.517679329s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:02.422189671 +0000 UTC m=+152.266268734" watchObservedRunningTime="2026-01-22 09:13:02.517679329 +0000 UTC m=+152.361758392" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.518514 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:13:02 crc kubenswrapper[4892]: E0122 09:13:02.518689 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:03.018676385 +0000 UTC m=+152.862755448 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.518957 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1422496-2b87-44ce-b710-32e8de483ebd-catalog-content\") pod \"certified-operators-w7q5c\" (UID: \"a1422496-2b87-44ce-b710-32e8de483ebd\") " pod="openshift-marketplace/certified-operators-w7q5c" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.519065 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1422496-2b87-44ce-b710-32e8de483ebd-utilities\") pod \"certified-operators-w7q5c\" (UID: \"a1422496-2b87-44ce-b710-32e8de483ebd\") " pod="openshift-marketplace/certified-operators-w7q5c" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.519154 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.519276 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd8rm\" (UniqueName: \"kubernetes.io/projected/a1422496-2b87-44ce-b710-32e8de483ebd-kube-api-access-qd8rm\") pod \"certified-operators-w7q5c\" (UID: \"a1422496-2b87-44ce-b710-32e8de483ebd\") " pod="openshift-marketplace/certified-operators-w7q5c" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.518577 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hzsh9" podStartSLOduration=127.518571212 podStartE2EDuration="2m7.518571212s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:02.457389625 +0000 UTC m=+152.301468688" watchObservedRunningTime="2026-01-22 09:13:02.518571212 +0000 UTC m=+152.362650275" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.519102 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 22 09:13:02 crc kubenswrapper[4892]: W0122 09:13:02.524327 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod909e154f_849c_4f69_96fb_74649b6db346.slice/crio-0bbfd66b6f83c1b7856314319aaa76c3826a8f8458c968292531b530053f8894 WatchSource:0}: Error finding container 0bbfd66b6f83c1b7856314319aaa76c3826a8f8458c968292531b530053f8894: Status 404 returned error can't find the container with id 0bbfd66b6f83c1b7856314319aaa76c3826a8f8458c968292531b530053f8894 Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.524989 4892 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5sj6" event={"ID":"a5b2a70f-548b-4483-a064-4f993d38286a","Type":"ContainerStarted","Data":"29728ea4ab4f490f742c2110027f3d0c8ed1b9313cab889b97789f88c8cf2393"} Jan 22 09:13:02 crc kubenswrapper[4892]: E0122 09:13:02.525113 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:03.025084593 +0000 UTC m=+152.869163656 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.529652 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-x78kb" event={"ID":"0595af03-a344-4c6c-95ae-0d73f99237ba","Type":"ContainerStarted","Data":"f5c96c40331369ac97ab7dc6905fcb8c1e38bbe0aa4bb883d250a349285d3a0b"} Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.535038 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w7q5c"] Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.537427 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-bknqb" podStartSLOduration=127.537379806 podStartE2EDuration="2m7.537379806s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:02.525412482 +0000 UTC m=+152.369491545" watchObservedRunningTime="2026-01-22 09:13:02.537379806 +0000 UTC m=+152.381458869" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.538924 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bkqvb" event={"ID":"c2b2a373-92d3-4af2-94e3-e4611dbd7785","Type":"ContainerStarted","Data":"374543202d68680fde47ea7d5d15cc8c2d0ca76d6acb72325102bd69a2b3c75c"} Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.556497 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7c5h" event={"ID":"77aaec88-130d-43b9-9828-24098fc3748d","Type":"ContainerStarted","Data":"552f4f7d2ca7b3bf15764fb02b53704fc8d733779f5b8d6b60ccdfad142c00f1"} Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.588093 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-26bnl" podStartSLOduration=127.588072548 podStartE2EDuration="2m7.588072548s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:02.587594785 +0000 UTC m=+152.431673838" watchObservedRunningTime="2026-01-22 09:13:02.588072548 +0000 UTC m=+152.432151611" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.622761 4892 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-62vzg"] Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.632215 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ctztv" podStartSLOduration=127.632202057 podStartE2EDuration="2m7.632202057s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:02.630849242 +0000 UTC m=+152.474928305" watchObservedRunningTime="2026-01-22 09:13:02.632202057 +0000 UTC m=+152.476281120" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.632843 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.643022 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pprl2" event={"ID":"0fbd9767-4b93-4496-b8e8-6a7d7f50df5e","Type":"ContainerStarted","Data":"c504dad717cb1ac738cc7698c3b6d898afd22f931bfbaf2c0676ce52f205dc79"} Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.643062 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pprl2" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.643149 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-62vzg" Jan 22 09:13:02 crc kubenswrapper[4892]: E0122 09:13:02.632904 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:03.132889455 +0000 UTC m=+152.976968518 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.643334 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1422496-2b87-44ce-b710-32e8de483ebd-utilities\") pod \"certified-operators-w7q5c\" (UID: \"a1422496-2b87-44ce-b710-32e8de483ebd\") " pod="openshift-marketplace/certified-operators-w7q5c" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.643390 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.643448 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd8rm\" (UniqueName: \"kubernetes.io/projected/a1422496-2b87-44ce-b710-32e8de483ebd-kube-api-access-qd8rm\") pod \"certified-operators-w7q5c\" (UID: \"a1422496-2b87-44ce-b710-32e8de483ebd\") " pod="openshift-marketplace/certified-operators-w7q5c" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.643615 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1422496-2b87-44ce-b710-32e8de483ebd-catalog-content\") pod \"certified-operators-w7q5c\" (UID: \"a1422496-2b87-44ce-b710-32e8de483ebd\") " pod="openshift-marketplace/certified-operators-w7q5c" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.644078 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1422496-2b87-44ce-b710-32e8de483ebd-catalog-content\") pod \"certified-operators-w7q5c\" (UID: \"a1422496-2b87-44ce-b710-32e8de483ebd\") " pod="openshift-marketplace/certified-operators-w7q5c" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.644538 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1422496-2b87-44ce-b710-32e8de483ebd-utilities\") pod \"certified-operators-w7q5c\" (UID: \"a1422496-2b87-44ce-b710-32e8de483ebd\") " pod="openshift-marketplace/certified-operators-w7q5c" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.646700 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.654573 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-62vzg"] Jan 22 09:13:02 crc kubenswrapper[4892]: E0122 09:13:02.663417 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-22 09:13:03.163395826 +0000 UTC m=+153.007474889 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.665606 4892 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-pprl2 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.665661 4892 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pprl2" podUID="0fbd9767-4b93-4496-b8e8-6a7d7f50df5e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.667642 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bkqvb" podStartSLOduration=127.667628498 podStartE2EDuration="2m7.667628498s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:02.666613491 +0000 UTC m=+152.510692554" watchObservedRunningTime="2026-01-22 09:13:02.667628498 +0000 UTC m=+152.511707561" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.692968 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd8rm\" (UniqueName: \"kubernetes.io/projected/a1422496-2b87-44ce-b710-32e8de483ebd-kube-api-access-qd8rm\") pod \"certified-operators-w7q5c\" (UID: \"a1422496-2b87-44ce-b710-32e8de483ebd\") " pod="openshift-marketplace/certified-operators-w7q5c" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.703378 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vgpdt" event={"ID":"a91f44ce-a5d5-4379-a443-c61626f142f7","Type":"ContainerStarted","Data":"9e039cf63a021887089e8d3b4bc3b58b069110c637c228054419a06bcd52d9eb"} Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.705399 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vgpdt" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.717617 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-mhgmk" event={"ID":"09f94488-4261-4a70-ab65-e85c42ba3313","Type":"ContainerStarted","Data":"f5a59c78d0e2d0cfbd9d180fe03c99e25389dba6b46b751e03d6d6d9b01a7165"} Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.753718 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.754248 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e984ad3d-befb-48a6-a5e3-597b3a8d4ff8-utilities\") pod \"community-operators-62vzg\" (UID: \"e984ad3d-befb-48a6-a5e3-597b3a8d4ff8\") " pod="openshift-marketplace/community-operators-62vzg" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.754375 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd8x6\" (UniqueName: \"kubernetes.io/projected/e984ad3d-befb-48a6-a5e3-597b3a8d4ff8-kube-api-access-fd8x6\") pod \"community-operators-62vzg\" (UID: \"e984ad3d-befb-48a6-a5e3-597b3a8d4ff8\") " pod="openshift-marketplace/community-operators-62vzg" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.754414 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e984ad3d-befb-48a6-a5e3-597b3a8d4ff8-catalog-content\") pod \"community-operators-62vzg\" (UID: \"e984ad3d-befb-48a6-a5e3-597b3a8d4ff8\") " pod="openshift-marketplace/community-operators-62vzg" Jan 22 09:13:02 crc kubenswrapper[4892]: E0122 09:13:02.755554 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:03.255536386 +0000 UTC m=+153.099615449 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.758744 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-bgd4w" event={"ID":"665bbb60-4e79-4d6d-b805-1e03ef3442be","Type":"ContainerStarted","Data":"50e401fecbd113809c531c596cc8173a5ada786527b3f8311ad28336a2a8b5e1"} Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.776158 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vgpdt" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.778963 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-st5q7" event={"ID":"c8d4a47e-b68c-428e-9e69-11b1040dd23e","Type":"ContainerStarted","Data":"bec6c76950b5de342998f884ab43e2ae395cc95dbf021231d139618e7e11aff7"} Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.778995 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-st5q7" event={"ID":"c8d4a47e-b68c-428e-9e69-11b1040dd23e","Type":"ContainerStarted","Data":"5cee5a867101e0ee025c6c8918b6c9cbe4c0c2103e8bff1df3d6b67c4272dd69"} Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.787003 4892 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-mhgmk" podStartSLOduration=127.786986002 podStartE2EDuration="2m7.786986002s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:02.785272847 +0000 UTC m=+152.629351910" watchObservedRunningTime="2026-01-22 09:13:02.786986002 +0000 UTC m=+152.631065055" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.787115 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pprl2" podStartSLOduration=127.787109235 podStartE2EDuration="2m7.787109235s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:02.753277796 +0000 UTC m=+152.597356859" watchObservedRunningTime="2026-01-22 09:13:02.787109235 +0000 UTC m=+152.631188298" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.792263 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" event={"ID":"3a357ae9-5621-4063-b475-508269240d98","Type":"ContainerStarted","Data":"fcdcea72dd4d085f9ee81bb55b22d1859fecb8fb11958c3c011738f6720bb671"} Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.792357 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.803887 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-96sxb" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.811467 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-4z8st" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.818985 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-56s7c" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.822616 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-99vdf"] Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.825771 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-wtrx4" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.825976 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-99vdf" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.839259 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vgpdt" podStartSLOduration=127.839238344 podStartE2EDuration="2m7.839238344s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:02.826848129 +0000 UTC m=+152.670927192" watchObservedRunningTime="2026-01-22 09:13:02.839238344 +0000 UTC m=+152.683317407" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.851860 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-99vdf"] Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.855579 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e984ad3d-befb-48a6-a5e3-597b3a8d4ff8-utilities\") pod \"community-operators-62vzg\" (UID: \"e984ad3d-befb-48a6-a5e3-597b3a8d4ff8\") " pod="openshift-marketplace/community-operators-62vzg" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.855625 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.855793 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd8x6\" (UniqueName: \"kubernetes.io/projected/e984ad3d-befb-48a6-a5e3-597b3a8d4ff8-kube-api-access-fd8x6\") pod \"community-operators-62vzg\" (UID: \"e984ad3d-befb-48a6-a5e3-597b3a8d4ff8\") " pod="openshift-marketplace/community-operators-62vzg" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.855860 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e984ad3d-befb-48a6-a5e3-597b3a8d4ff8-catalog-content\") pod \"community-operators-62vzg\" (UID: \"e984ad3d-befb-48a6-a5e3-597b3a8d4ff8\") " pod="openshift-marketplace/community-operators-62vzg" Jan 22 09:13:02 crc kubenswrapper[4892]: E0122 09:13:02.861984 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:03.361963821 +0000 UTC m=+153.206042884 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.887217 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e984ad3d-befb-48a6-a5e3-597b3a8d4ff8-catalog-content\") pod \"community-operators-62vzg\" (UID: \"e984ad3d-befb-48a6-a5e3-597b3a8d4ff8\") " pod="openshift-marketplace/community-operators-62vzg" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.887428 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e984ad3d-befb-48a6-a5e3-597b3a8d4ff8-utilities\") pod \"community-operators-62vzg\" (UID: \"e984ad3d-befb-48a6-a5e3-597b3a8d4ff8\") " pod="openshift-marketplace/community-operators-62vzg" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.902003 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w7q5c" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.924359 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd8x6\" (UniqueName: \"kubernetes.io/projected/e984ad3d-befb-48a6-a5e3-597b3a8d4ff8-kube-api-access-fd8x6\") pod \"community-operators-62vzg\" (UID: \"e984ad3d-befb-48a6-a5e3-597b3a8d4ff8\") " pod="openshift-marketplace/community-operators-62vzg" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.955873 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-bknqb" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.956038 4892 patch_prober.go:28] interesting pod/router-default-5444994796-bknqb container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.956069 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bknqb" podUID="1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.963965 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.964175 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b80f630-aab1-47ce-87ba-f8f753f5664a-catalog-content\") pod \"certified-operators-99vdf\" (UID: \"0b80f630-aab1-47ce-87ba-f8f753f5664a\") " pod="openshift-marketplace/certified-operators-99vdf" Jan 22 09:13:02 crc 
kubenswrapper[4892]: I0122 09:13:02.964249 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b80f630-aab1-47ce-87ba-f8f753f5664a-utilities\") pod \"certified-operators-99vdf\" (UID: \"0b80f630-aab1-47ce-87ba-f8f753f5664a\") " pod="openshift-marketplace/certified-operators-99vdf" Jan 22 09:13:02 crc kubenswrapper[4892]: I0122 09:13:02.964349 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnh7q\" (UniqueName: \"kubernetes.io/projected/0b80f630-aab1-47ce-87ba-f8f753f5664a-kube-api-access-gnh7q\") pod \"certified-operators-99vdf\" (UID: \"0b80f630-aab1-47ce-87ba-f8f753f5664a\") " pod="openshift-marketplace/certified-operators-99vdf" Jan 22 09:13:02 crc kubenswrapper[4892]: E0122 09:13:02.964456 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:03.464441543 +0000 UTC m=+153.308520606 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.008263 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-62vzg" Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.067139 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnh7q\" (UniqueName: \"kubernetes.io/projected/0b80f630-aab1-47ce-87ba-f8f753f5664a-kube-api-access-gnh7q\") pod \"certified-operators-99vdf\" (UID: \"0b80f630-aab1-47ce-87ba-f8f753f5664a\") " pod="openshift-marketplace/certified-operators-99vdf" Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.067189 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b80f630-aab1-47ce-87ba-f8f753f5664a-catalog-content\") pod \"certified-operators-99vdf\" (UID: \"0b80f630-aab1-47ce-87ba-f8f753f5664a\") " pod="openshift-marketplace/certified-operators-99vdf" Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.067233 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b80f630-aab1-47ce-87ba-f8f753f5664a-utilities\") pod \"certified-operators-99vdf\" (UID: \"0b80f630-aab1-47ce-87ba-f8f753f5664a\") " pod="openshift-marketplace/certified-operators-99vdf" Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.067272 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:03 crc kubenswrapper[4892]: E0122 
09:13:03.067576 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:03.567564342 +0000 UTC m=+153.411643405 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.068026 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b80f630-aab1-47ce-87ba-f8f753f5664a-catalog-content\") pod \"certified-operators-99vdf\" (UID: \"0b80f630-aab1-47ce-87ba-f8f753f5664a\") " pod="openshift-marketplace/certified-operators-99vdf" Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.068223 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b80f630-aab1-47ce-87ba-f8f753f5664a-utilities\") pod \"certified-operators-99vdf\" (UID: \"0b80f630-aab1-47ce-87ba-f8f753f5664a\") " pod="openshift-marketplace/certified-operators-99vdf" Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.080275 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" podStartSLOduration=128.080252315 podStartE2EDuration="2m8.080252315s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:03.068300851 +0000 UTC m=+152.912379914" watchObservedRunningTime="2026-01-22 09:13:03.080252315 +0000 UTC m=+152.924331378" Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.080937 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k75r9"] Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.081916 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k75r9" Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.095638 4892 csr.go:261] certificate signing request csr-htt9c is approved, waiting to be issued Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.097815 4892 csr.go:257] certificate signing request csr-htt9c is issued Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.098532 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k75r9"] Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.120025 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.129579 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnh7q\" (UniqueName: \"kubernetes.io/projected/0b80f630-aab1-47ce-87ba-f8f753f5664a-kube-api-access-gnh7q\") pod \"certified-operators-99vdf\" (UID: \"0b80f630-aab1-47ce-87ba-f8f753f5664a\") " pod="openshift-marketplace/certified-operators-99vdf" Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.155030 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-bgd4w" podStartSLOduration=128.155008509 podStartE2EDuration="2m8.155008509s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:03.142040708 +0000 UTC m=+152.986119771" watchObservedRunningTime="2026-01-22 09:13:03.155008509 +0000 UTC m=+152.999087572" Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.169351 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:13:03 crc kubenswrapper[4892]: E0122 09:13:03.169646 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:03.669628593 +0000 UTC m=+153.513707656 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.169913 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qh65\" (UniqueName: \"kubernetes.io/projected/e788dce1-96bb-4768-950f-08abe5d34305-kube-api-access-6qh65\") pod \"community-operators-k75r9\" (UID: \"e788dce1-96bb-4768-950f-08abe5d34305\") " pod="openshift-marketplace/community-operators-k75r9" Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.170111 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e788dce1-96bb-4768-950f-08abe5d34305-catalog-content\") pod \"community-operators-k75r9\" (UID: \"e788dce1-96bb-4768-950f-08abe5d34305\") " pod="openshift-marketplace/community-operators-k75r9" Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.170247 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.170388 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e788dce1-96bb-4768-950f-08abe5d34305-utilities\") pod \"community-operators-k75r9\" (UID: \"e788dce1-96bb-4768-950f-08abe5d34305\") " pod="openshift-marketplace/community-operators-k75r9" Jan 22 09:13:03 crc kubenswrapper[4892]: E0122 09:13:03.170842 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:03.670832875 +0000 UTC m=+153.514911938 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.234773 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-99vdf" Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.270944 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.271421 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e788dce1-96bb-4768-950f-08abe5d34305-catalog-content\") pod \"community-operators-k75r9\" (UID: \"e788dce1-96bb-4768-950f-08abe5d34305\") " pod="openshift-marketplace/community-operators-k75r9" Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.271625 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e788dce1-96bb-4768-950f-08abe5d34305-utilities\") pod \"community-operators-k75r9\" (UID: \"e788dce1-96bb-4768-950f-08abe5d34305\") " pod="openshift-marketplace/community-operators-k75r9" Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.271731 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qh65\" (UniqueName: \"kubernetes.io/projected/e788dce1-96bb-4768-950f-08abe5d34305-kube-api-access-6qh65\") pod \"community-operators-k75r9\" (UID: \"e788dce1-96bb-4768-950f-08abe5d34305\") " pod="openshift-marketplace/community-operators-k75r9" Jan 22 09:13:03 crc kubenswrapper[4892]: E0122 09:13:03.272509 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:03.772485465 +0000 UTC m=+153.616564528 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.273153 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e788dce1-96bb-4768-950f-08abe5d34305-catalog-content\") pod \"community-operators-k75r9\" (UID: \"e788dce1-96bb-4768-950f-08abe5d34305\") " pod="openshift-marketplace/community-operators-k75r9" Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.279528 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e788dce1-96bb-4768-950f-08abe5d34305-utilities\") pod \"community-operators-k75r9\" (UID: \"e788dce1-96bb-4768-950f-08abe5d34305\") " pod="openshift-marketplace/community-operators-k75r9" Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.317119 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-st5q7" podStartSLOduration=128.317101707 podStartE2EDuration="2m8.317101707s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:03.27267434 +0000 UTC m=+153.116753403" watchObservedRunningTime="2026-01-22 09:13:03.317101707 +0000 UTC m=+153.161180760" Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.373299 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.373604 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qh65\" (UniqueName: \"kubernetes.io/projected/e788dce1-96bb-4768-950f-08abe5d34305-kube-api-access-6qh65\") pod \"community-operators-k75r9\" (UID: \"e788dce1-96bb-4768-950f-08abe5d34305\") " pod="openshift-marketplace/community-operators-k75r9" Jan 22 09:13:03 crc kubenswrapper[4892]: E0122 09:13:03.374095 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:03.874081004 +0000 UTC m=+153.718160067 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.461014 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k75r9" Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.475082 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:13:03 crc kubenswrapper[4892]: E0122 09:13:03.475473 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:03.975457097 +0000 UTC m=+153.819536160 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.585870 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:03 crc kubenswrapper[4892]: E0122 09:13:03.586657 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:04.086639377 +0000 UTC m=+153.930718440 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.687750 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:13:03 crc kubenswrapper[4892]: E0122 09:13:03.688112 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:04.188098442 +0000 UTC m=+154.032177505 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.789006 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:03 crc kubenswrapper[4892]: E0122 09:13:03.789341 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:04.289325731 +0000 UTC m=+154.133404794 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.853910 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-698rc" event={"ID":"7bd4dc40-4703-47e1-93d5-5995342429e3","Type":"ContainerStarted","Data":"43c60a576362ac725ab818623c7884247d08bd99b0736d1db3a2ab43fc122c6b"} Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.882346 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-698rc" podStartSLOduration=128.882329224 podStartE2EDuration="2m8.882329224s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:03.880064855 +0000 UTC m=+153.724143918" watchObservedRunningTime="2026-01-22 09:13:03.882329224 +0000 UTC m=+153.726408287" Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.886753 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7c5h" event={"ID":"77aaec88-130d-43b9-9828-24098fc3748d","Type":"ContainerStarted","Data":"277dd392b304f766d536926eb956dca2325b6b4f6ee53addbf00bfd339d018e3"} Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.896212 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:13:03 crc kubenswrapper[4892]: E0122 09:13:03.896669 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:04.39665601 +0000 UTC m=+154.240735073 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.904097 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-cdmgc" event={"ID":"71f045da-d27a-4ec0-a059-2107d4e0225c","Type":"ContainerStarted","Data":"1ee020544b85336df80e51a2c876595e20058fadb8e36906a7eaa6cac1d73396"} Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.911622 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5sj6" event={"ID":"a5b2a70f-548b-4483-a064-4f993d38286a","Type":"ContainerStarted","Data":"b5eb66a53b21e5bab235fc8860547b83302254ccd4b5e7395c585fa9d0d95617"} Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.911905 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5sj6" event={"ID":"a5b2a70f-548b-4483-a064-4f993d38286a","Type":"ContainerStarted","Data":"407fda49a459a5500c2210b9804d2c3ab112b0d048d8e8da471efa43f2aecc14"} Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.922389 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7c5h" podStartSLOduration=128.922378096 podStartE2EDuration="2m8.922378096s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:03.920770864 +0000 UTC m=+153.764849927" watchObservedRunningTime="2026-01-22 09:13:03.922378096 +0000 UTC m=+153.766457159" Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.939178 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-smrbs" event={"ID":"d0142ef1-2eb1-43e9-99f3-e81a44383bd0","Type":"ContainerStarted","Data":"9cfc15cc8ba477172cf6a4e28ee6d7415b2663a889a78532ad8c2ed8ccd91489"} Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.939235 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-smrbs" event={"ID":"d0142ef1-2eb1-43e9-99f3-e81a44383bd0","Type":"ContainerStarted","Data":"d13905ba6f467608cc681e3e7697fad5602f3e0d378abac609e70d181d760328"} Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.940973 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-smrbs" Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.953360 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m5gnq" event={"ID":"ea2bbc05-6497-4f5d-9681-a4f2d27e7aa7","Type":"ContainerStarted","Data":"a55dc4f33b20b043905ae25d35edf5d7ed205eb6604b4747e6c079fdaa615925"} Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.960337 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-cdmgc" podStartSLOduration=128.960320303 podStartE2EDuration="2m8.960320303s" 
podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:03.959323457 +0000 UTC m=+153.803402520" watchObservedRunningTime="2026-01-22 09:13:03.960320303 +0000 UTC m=+153.804399366" Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.969511 4892 patch_prober.go:28] interesting pod/downloads-7954f5f757-smrbs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.969568 4892 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-smrbs" podUID="d0142ef1-2eb1-43e9-99f3-e81a44383bd0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.969805 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pprl2" event={"ID":"0fbd9767-4b93-4496-b8e8-6a7d7f50df5e","Type":"ContainerStarted","Data":"8907b53dbce9db2f44af1944f57b0000f3f77854e364497d3d8d9b2b4c2630f8"} Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.970507 4892 patch_prober.go:28] interesting pod/router-default-5444994796-bknqb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:13:03 crc kubenswrapper[4892]: [-]has-synced failed: reason withheld Jan 22 09:13:03 crc kubenswrapper[4892]: [+]process-running ok Jan 22 09:13:03 crc kubenswrapper[4892]: healthz check failed Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.970539 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bknqb" podUID="1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.973526 4892 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-pprl2 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.973560 4892 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pprl2" podUID="0fbd9767-4b93-4496-b8e8-6a7d7f50df5e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Jan 22 09:13:03 crc kubenswrapper[4892]: I0122 09:13:03.992394 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w7q5c"] Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:03.999354 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:04 crc kubenswrapper[4892]: E0122 09:13:04.014350 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:04.514336152 +0000 UTC m=+154.358415215 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.017211 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5sj6" podStartSLOduration=129.017198937 podStartE2EDuration="2m9.017198937s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:04.013261253 +0000 UTC m=+153.857340306" watchObservedRunningTime="2026-01-22 09:13:04.017198937 +0000 UTC m=+153.861278000" Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.078114 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-82b7j" event={"ID":"5fee1fa7-f83e-4be4-88f0-ed57f5f1d051","Type":"ContainerStarted","Data":"d0bfe0f79324a5f09fc0e444a1cadbadcfd9fa171525adfc8f3095853d994fd8"} Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.084074 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-62vzg"] Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.096001 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m5gnq" podStartSLOduration=129.095985086 podStartE2EDuration="2m9.095985086s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:04.057241999 +0000 UTC m=+153.901321062" watchObservedRunningTime="2026-01-22 09:13:04.095985086 +0000 UTC m=+153.940064149" Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.098938 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-22 09:08:03 +0000 UTC, rotation deadline is 2026-10-12 03:34:55.215492205 +0000 UTC Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.098968 4892 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6306h21m51.11653194s for next certificate rotation Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.100718 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:13:04 crc kubenswrapper[4892]: E0122 09:13:04.100908 4892 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:04.600894665 +0000 UTC m=+154.444973728 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.100931 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-smrbs" podStartSLOduration=129.100729541 podStartE2EDuration="2m9.100729541s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:04.095815622 +0000 UTC m=+153.939894685" watchObservedRunningTime="2026-01-22 09:13:04.100729541 +0000 UTC m=+153.944808604" Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.102248 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:04 crc kubenswrapper[4892]: E0122 09:13:04.109294 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:04.609264995 +0000 UTC m=+154.453344048 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.129690 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pjgc7" event={"ID":"1b7f981e-f1f9-44c3-9b6b-cbb8a2ba1605","Type":"ContainerStarted","Data":"1c2902bf8ac3b5783927834defd38139e0d4a6e31e56ac1bdf324c60ad8f5c08"} Jan 22 09:13:04 crc kubenswrapper[4892]: W0122 09:13:04.143628 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode984ad3d_befb_48a6_a5e3_597b3a8d4ff8.slice/crio-4aea12c815c9a0438bbc3e3ce365d23bf46068bd05e1336c52c33db9b5ac0efd WatchSource:0}: Error finding container 4aea12c815c9a0438bbc3e3ce365d23bf46068bd05e1336c52c33db9b5ac0efd: Status 404 returned error can't find the container with id 4aea12c815c9a0438bbc3e3ce365d23bf46068bd05e1336c52c33db9b5ac0efd Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.159675 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq" event={"ID":"6a6e5907-552b-4dc7-884f-d766a773e8b0","Type":"ContainerStarted","Data":"155eaeaa524e23d4bea998d0f8c99a1691a9e6053058bb5ac7d673cac64173a3"} Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.181979 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n2ds6" event={"ID":"e92cbfe3-753e-4263-b38b-05c6d89fc48c","Type":"ContainerStarted","Data":"8686fd19e068330e09a1939b78017547067e2cbfc33d027c3bb78f34128345a1"} Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.201879 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pkwbh" event={"ID":"7a02064e-95a9-4e55-b21e-45868b0362f2","Type":"ContainerStarted","Data":"f412fb1e91c53c61794925e3a5a79942defd388fd92fa4f5e8e4bbccdb2638eb"} Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.202902 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pkwbh" Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.209692 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:13:04 crc kubenswrapper[4892]: E0122 09:13:04.210893 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:04.710876584 +0000 UTC m=+154.554955647 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.212947 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-bgd4w" event={"ID":"665bbb60-4e79-4d6d-b805-1e03ef3442be","Type":"ContainerStarted","Data":"e6910428801437b8c1abb965a14945a3975ce56eb8446d136960b8603089a37d"} Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.215752 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-82b7j" podStartSLOduration=129.215736052 podStartE2EDuration="2m9.215736052s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:04.157637966 +0000 UTC m=+154.001717029" watchObservedRunningTime="2026-01-22 09:13:04.215736052 +0000 UTC m=+154.059815115" Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.215834 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pjgc7" podStartSLOduration=129.215830394 podStartE2EDuration="2m9.215830394s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:04.203072999 +0000 UTC m=+154.047152062" watchObservedRunningTime="2026-01-22 09:13:04.215830394 +0000 UTC m=+154.059909457" Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.250433 4892 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-pkwbh container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" start-of-body= Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.250488 4892 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pkwbh" podUID="7a02064e-95a9-4e55-b21e-45868b0362f2" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.257780 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kw2w9" event={"ID":"f2895e12-d7e7-4eb4-8455-cae19e2347c2","Type":"ContainerStarted","Data":"76d09cc5b04587f3eb2c7fe080f03dbd83ff9d0a5499cf6c9f4cb4dcce78c58a"} Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.258404 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kw2w9" Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.269204 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-x78kb" 
event={"ID":"0595af03-a344-4c6c-95ae-0d73f99237ba","Type":"ContainerStarted","Data":"e3ada605a8ce2f0ba73d9b1491dc26d414bcfe60d1b5f3209a14fdb36820046d"} Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.271149 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-86wr5" event={"ID":"23125b22-0965-46a8-a698-dc256f032b3c","Type":"ContainerStarted","Data":"e83ae88bdf48e78496644989e5c857bfdbbcfb6cff66ae4b119dfddd29ddf772"} Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.291576 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vw5wt" event={"ID":"909e154f-849c-4f69-96fb-74649b6db346","Type":"ContainerStarted","Data":"0bbfd66b6f83c1b7856314319aaa76c3826a8f8458c968292531b530053f8894"} Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.311160 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbzt8" event={"ID":"48ffec72-092a-4145-9136-d05df9fab68a","Type":"ContainerStarted","Data":"578e528a72ceaf1246e0f777850aad877af2d081637be27d1e50b05f5492999d"} Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.311472 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:04 crc kubenswrapper[4892]: E0122 09:13:04.324537 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:04.824516139 +0000 UTC m=+154.668595202 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.336267 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq" podStartSLOduration=129.336250168 podStartE2EDuration="2m9.336250168s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:04.335033836 +0000 UTC m=+154.179112899" watchObservedRunningTime="2026-01-22 09:13:04.336250168 +0000 UTC m=+154.180329221" Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.338750 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" event={"ID":"ed66930a-e393-47ea-a98d-907a1327edac","Type":"ContainerStarted","Data":"293df7617b811dc5e5fdd76dc7c8cf2b244f5e91b7a2e5531112874b80550b29"} Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.366784 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wfs96" event={"ID":"82cc8d88-f16b-4fa6-90f3-c661418b5a12","Type":"ContainerStarted","Data":"45577b5598c4933a65243abd27287bbebd899e8fc1900e19231a76accd0daf72"} Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.401631 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wf6dw" event={"ID":"3a01b910-5841-4f20-b270-c7040213ac8d","Type":"ContainerStarted","Data":"8ef2d5ded9cd75a37628fa30c3835196386a55134cc2e847afac9c73281e62d8"} Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.412830 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:13:04 crc kubenswrapper[4892]: E0122 09:13:04.412965 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:04.912944042 +0000 UTC m=+154.757023095 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.413130 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:04 crc kubenswrapper[4892]: E0122 09:13:04.415487 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:04.915477469 +0000 UTC m=+154.759556532 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.416468 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rhbtn" event={"ID":"1949fb71-227b-4649-b4e2-62b3d7519dec","Type":"ContainerStarted","Data":"27220e994d133cf4fab5212424a9e93f64a9f887e5da37394fd2012746c99165"} Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.428922 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hk2bk" event={"ID":"3d0f0ae0-7c09-407c-b260-127ff7828c39","Type":"ContainerStarted","Data":"2b538280d42a62856aa5dad4f01791201e6e605eed04d8157a40746f5101151d"} Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.429093 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pkwbh" podStartSLOduration=129.429074906 podStartE2EDuration="2m9.429074906s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:04.420730497 +0000 UTC m=+154.264809560" watchObservedRunningTime="2026-01-22 09:13:04.429074906 +0000 UTC m=+154.273153969" Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.448727 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k75r9"] Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.486717 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-99vdf"] Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.487169 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dgbdj"] Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.491662 4892 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dgbdj" Jan 22 09:13:04 crc kubenswrapper[4892]: W0122 09:13:04.499746 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode788dce1_96bb_4768_950f_08abe5d34305.slice/crio-91f07f6f38ec6ae40131e1f760032e35296fa861f1907cc815c08d6ac32fba8b WatchSource:0}: Error finding container 91f07f6f38ec6ae40131e1f760032e35296fa861f1907cc815c08d6ac32fba8b: Status 404 returned error can't find the container with id 91f07f6f38ec6ae40131e1f760032e35296fa861f1907cc815c08d6ac32fba8b Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.514394 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.514779 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3591a30-3306-421b-a004-90ed127b1ac1-utilities\") pod \"redhat-marketplace-dgbdj\" (UID: \"b3591a30-3306-421b-a004-90ed127b1ac1\") " pod="openshift-marketplace/redhat-marketplace-dgbdj" Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.514885 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3591a30-3306-421b-a004-90ed127b1ac1-catalog-content\") pod \"redhat-marketplace-dgbdj\" (UID: \"b3591a30-3306-421b-a004-90ed127b1ac1\") " pod="openshift-marketplace/redhat-marketplace-dgbdj" Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.515568 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2lhx\" (UniqueName: \"kubernetes.io/projected/b3591a30-3306-421b-a004-90ed127b1ac1-kube-api-access-t2lhx\") pod \"redhat-marketplace-dgbdj\" (UID: \"b3591a30-3306-421b-a004-90ed127b1ac1\") " pod="openshift-marketplace/redhat-marketplace-dgbdj" Jan 22 09:13:04 crc kubenswrapper[4892]: E0122 09:13:04.516268 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:05.016243846 +0000 UTC m=+154.860322909 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.520997 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.524548 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kw2w9" podStartSLOduration=129.524507633 podStartE2EDuration="2m9.524507633s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:04.518799073 +0000 UTC m=+154.362878156" watchObservedRunningTime="2026-01-22 09:13:04.524507633 +0000 UTC m=+154.368586696" Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.552365 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dgbdj"] Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.609789 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bbzt8" podStartSLOduration=129.609769492 podStartE2EDuration="2m9.609769492s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:04.605663495 +0000 UTC m=+154.449742558" watchObservedRunningTime="2026-01-22 09:13:04.609769492 +0000 UTC m=+154.453848555" Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.618854 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.618932 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2lhx\" (UniqueName: \"kubernetes.io/projected/b3591a30-3306-421b-a004-90ed127b1ac1-kube-api-access-t2lhx\") pod \"redhat-marketplace-dgbdj\" (UID: \"b3591a30-3306-421b-a004-90ed127b1ac1\") " pod="openshift-marketplace/redhat-marketplace-dgbdj" Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.619003 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3591a30-3306-421b-a004-90ed127b1ac1-utilities\") pod \"redhat-marketplace-dgbdj\" (UID: \"b3591a30-3306-421b-a004-90ed127b1ac1\") " pod="openshift-marketplace/redhat-marketplace-dgbdj" Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.619045 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3591a30-3306-421b-a004-90ed127b1ac1-catalog-content\") pod 
\"redhat-marketplace-dgbdj\" (UID: \"b3591a30-3306-421b-a004-90ed127b1ac1\") " pod="openshift-marketplace/redhat-marketplace-dgbdj" Jan 22 09:13:04 crc kubenswrapper[4892]: E0122 09:13:04.621228 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:05.121212273 +0000 UTC m=+154.965291336 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.630721 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3591a30-3306-421b-a004-90ed127b1ac1-catalog-content\") pod \"redhat-marketplace-dgbdj\" (UID: \"b3591a30-3306-421b-a004-90ed127b1ac1\") " pod="openshift-marketplace/redhat-marketplace-dgbdj" Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.634656 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3591a30-3306-421b-a004-90ed127b1ac1-utilities\") pod \"redhat-marketplace-dgbdj\" (UID: \"b3591a30-3306-421b-a004-90ed127b1ac1\") " pod="openshift-marketplace/redhat-marketplace-dgbdj" Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.716193 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2lhx\" (UniqueName: \"kubernetes.io/projected/b3591a30-3306-421b-a004-90ed127b1ac1-kube-api-access-t2lhx\") pod \"redhat-marketplace-dgbdj\" (UID: \"b3591a30-3306-421b-a004-90ed127b1ac1\") " pod="openshift-marketplace/redhat-marketplace-dgbdj" Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.719833 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:13:04 crc kubenswrapper[4892]: E0122 09:13:04.720120 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:05.220104291 +0000 UTC m=+155.064183354 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.801935 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-86wr5" podStartSLOduration=129.80191848 podStartE2EDuration="2m9.80191848s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:04.737663202 +0000 UTC m=+154.581742265" watchObservedRunningTime="2026-01-22 09:13:04.80191848 +0000 UTC m=+154.645997533" Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.802456 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-x78kb" podStartSLOduration=8.802451704 podStartE2EDuration="8.802451704s" podCreationTimestamp="2026-01-22 09:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:04.801486488 +0000 UTC m=+154.645565551" watchObservedRunningTime="2026-01-22 09:13:04.802451704 +0000 UTC m=+154.646530767" Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.824485 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dgbdj" Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.825165 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4jr8g"] Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.825575 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:04 crc kubenswrapper[4892]: E0122 09:13:04.825910 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:05.32589671 +0000 UTC m=+155.169975773 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.826203 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4jr8g" Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.831156 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wfs96" podStartSLOduration=129.831141087 podStartE2EDuration="2m9.831141087s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:04.829200826 +0000 UTC m=+154.673279889" watchObservedRunningTime="2026-01-22 09:13:04.831141087 +0000 UTC m=+154.675220150" Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.843894 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4jr8g"] Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.912788 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" podStartSLOduration=129.912765361 podStartE2EDuration="2m9.912765361s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:04.876035587 +0000 UTC m=+154.720114650" watchObservedRunningTime="2026-01-22 09:13:04.912765361 +0000 UTC m=+154.756844424" Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.926980 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.927061 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de808d2e-2d5a-458c-a3ca-a6475a9fde39-utilities\") pod \"redhat-marketplace-4jr8g\" (UID: \"de808d2e-2d5a-458c-a3ca-a6475a9fde39\") " pod="openshift-marketplace/redhat-marketplace-4jr8g" Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.927094 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shg9w\" (UniqueName: \"kubernetes.io/projected/de808d2e-2d5a-458c-a3ca-a6475a9fde39-kube-api-access-shg9w\") pod \"redhat-marketplace-4jr8g\" (UID: \"de808d2e-2d5a-458c-a3ca-a6475a9fde39\") " pod="openshift-marketplace/redhat-marketplace-4jr8g" Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.927199 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de808d2e-2d5a-458c-a3ca-a6475a9fde39-catalog-content\") pod \"redhat-marketplace-4jr8g\" (UID: \"de808d2e-2d5a-458c-a3ca-a6475a9fde39\") " pod="openshift-marketplace/redhat-marketplace-4jr8g" Jan 22 09:13:04 crc kubenswrapper[4892]: E0122 09:13:04.927300 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:05.427273503 +0000 UTC m=+155.271352566 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.941349 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vw5wt" podStartSLOduration=129.941331712 podStartE2EDuration="2m9.941331712s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:04.914171708 +0000 UTC m=+154.758250771" watchObservedRunningTime="2026-01-22 09:13:04.941331712 +0000 UTC m=+154.785410775" Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.961197 4892 patch_prober.go:28] interesting pod/router-default-5444994796-bknqb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:13:04 crc kubenswrapper[4892]: [-]has-synced failed: reason withheld Jan 22 09:13:04 crc kubenswrapper[4892]: [+]process-running ok Jan 22 09:13:04 crc kubenswrapper[4892]: healthz check failed Jan 22 09:13:04 crc kubenswrapper[4892]: I0122 09:13:04.961255 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bknqb" podUID="1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.008242 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wf6dw" podStartSLOduration=130.008225289 podStartE2EDuration="2m10.008225289s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:04.942917423 +0000 UTC m=+154.786996486" watchObservedRunningTime="2026-01-22 09:13:05.008225289 +0000 UTC m=+154.852304352" Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.029959 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.030020 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de808d2e-2d5a-458c-a3ca-a6475a9fde39-catalog-content\") pod \"redhat-marketplace-4jr8g\" (UID: \"de808d2e-2d5a-458c-a3ca-a6475a9fde39\") " pod="openshift-marketplace/redhat-marketplace-4jr8g" Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.030044 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/de808d2e-2d5a-458c-a3ca-a6475a9fde39-utilities\") pod \"redhat-marketplace-4jr8g\" (UID: \"de808d2e-2d5a-458c-a3ca-a6475a9fde39\") " pod="openshift-marketplace/redhat-marketplace-4jr8g" Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.030074 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shg9w\" (UniqueName: \"kubernetes.io/projected/de808d2e-2d5a-458c-a3ca-a6475a9fde39-kube-api-access-shg9w\") pod \"redhat-marketplace-4jr8g\" (UID: \"de808d2e-2d5a-458c-a3ca-a6475a9fde39\") " pod="openshift-marketplace/redhat-marketplace-4jr8g" Jan 22 09:13:05 crc kubenswrapper[4892]: E0122 09:13:05.030554 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:05.530544385 +0000 UTC m=+155.374623448 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.031247 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de808d2e-2d5a-458c-a3ca-a6475a9fde39-catalog-content\") pod \"redhat-marketplace-4jr8g\" (UID: \"de808d2e-2d5a-458c-a3ca-a6475a9fde39\") " pod="openshift-marketplace/redhat-marketplace-4jr8g" Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.031479 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de808d2e-2d5a-458c-a3ca-a6475a9fde39-utilities\") pod \"redhat-marketplace-4jr8g\" (UID: \"de808d2e-2d5a-458c-a3ca-a6475a9fde39\") " pod="openshift-marketplace/redhat-marketplace-4jr8g" Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.069238 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shg9w\" (UniqueName: \"kubernetes.io/projected/de808d2e-2d5a-458c-a3ca-a6475a9fde39-kube-api-access-shg9w\") pod \"redhat-marketplace-4jr8g\" (UID: \"de808d2e-2d5a-458c-a3ca-a6475a9fde39\") " pod="openshift-marketplace/redhat-marketplace-4jr8g" Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.131700 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:13:05 crc kubenswrapper[4892]: E0122 09:13:05.132108 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:05.632090863 +0000 UTC m=+155.476169926 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.232995 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:05 crc kubenswrapper[4892]: E0122 09:13:05.233296 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:05.73327222 +0000 UTC m=+155.577351283 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.256790 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4jr8g" Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.306778 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dgbdj"] Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.346095 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:13:05 crc kubenswrapper[4892]: E0122 09:13:05.346396 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:05.846379231 +0000 UTC m=+155.690458294 (durationBeforeRetry 500ms). 
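Each failed volume operation is parked by nestedpendingoperations and may not be retried until the logged deadline passes; here the gate is 500 ms, which is why the same UnmountVolume/MountVolume pair reappears roughly twice a second until the driver registers. A hedged sketch of that retry gate (illustrative only; the real logic lives in the kubelet's nestedpendingoperations and exponential-backoff code):

```go
// Sketch of the pattern behind "No retries permitted until <t>
// (durationBeforeRetry 500ms)": a failed operation records a deadline,
// and attempts before that deadline are rejected without doing work.
package main

import (
	"fmt"
	"time"
)

type backoff struct {
	lastError time.Time
	duration  time.Duration
}

func (b *backoff) allowed(now time.Time) error {
	if deadline := b.lastError.Add(b.duration); now.Before(deadline) {
		return fmt.Errorf("no retries permitted until %s (durationBeforeRetry %s)",
			deadline.Format(time.RFC3339Nano), b.duration)
	}
	return nil // operation may be attempted again
}

func main() {
	b := &backoff{lastError: time.Now(), duration: 500 * time.Millisecond}
	fmt.Println(b.allowed(time.Now())) // still gated
	time.Sleep(600 * time.Millisecond)
	fmt.Println(b.allowed(time.Now())) // <nil>: retry allowed
}
```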
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.447552 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b"
Jan 22 09:13:05 crc kubenswrapper[4892]: E0122 09:13:05.448070 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:05.948059292 +0000 UTC m=+155.792138355 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.480796 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n2ds6" event={"ID":"e92cbfe3-753e-4263-b38b-05c6d89fc48c","Type":"ContainerStarted","Data":"1b83e6c97b516c5386e6951312c60a9fe4739f7a75ac324fd7eac02766523a91"}
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.480855 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n2ds6" event={"ID":"e92cbfe3-753e-4263-b38b-05c6d89fc48c","Type":"ContainerStarted","Data":"23cb7f7b6471eaacab27741378178d8ef84dd6c9d70a4cacf2404fd535e8f32f"}
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.505600 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pkwbh" event={"ID":"7a02064e-95a9-4e55-b21e-45868b0362f2","Type":"ContainerStarted","Data":"bc1fe76c7fada5645d0359d95955e8666b0f9e2e6c61d6222b162176875a6a1b"}
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.526576 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" event={"ID":"ed66930a-e393-47ea-a98d-907a1327edac","Type":"ContainerStarted","Data":"0dfbfebb32caf5b0cc2ba242c30948f599d210c872fb52fb6103c0d8f1ac2fd2"}
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.548502 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 09:13:05 crc kubenswrapper[4892]: E0122 09:13:05.548787 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:06.048771888 +0000 UTC m=+155.892850951 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.554577 4892 generic.go:334] "Generic (PLEG): container finished" podID="e984ad3d-befb-48a6-a5e3-597b3a8d4ff8" containerID="f7bd6c14f1560fb0ea3ee393e5d6d6289efa8b889182d6fb6d54d03e1ee6d97c" exitCode=0
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.554661 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62vzg" event={"ID":"e984ad3d-befb-48a6-a5e3-597b3a8d4ff8","Type":"ContainerDied","Data":"f7bd6c14f1560fb0ea3ee393e5d6d6289efa8b889182d6fb6d54d03e1ee6d97c"}
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.554687 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62vzg" event={"ID":"e984ad3d-befb-48a6-a5e3-597b3a8d4ff8","Type":"ContainerStarted","Data":"4aea12c815c9a0438bbc3e3ce365d23bf46068bd05e1336c52c33db9b5ac0efd"}
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.570699 4892 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.593659 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h"
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.595973 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n2ds6" podStartSLOduration=130.595956757 podStartE2EDuration="2m10.595956757s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:05.524973173 +0000 UTC m=+155.369052236" watchObservedRunningTime="2026-01-22 09:13:05.595956757 +0000 UTC m=+155.440035810"
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.598477 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dgbdj" event={"ID":"b3591a30-3306-421b-a004-90ed127b1ac1","Type":"ContainerStarted","Data":"45087bb15ffe494efb75efec55a095c8c041b85314a3b9a338b6e7d9b22aa876"}
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.609796 4892 generic.go:334] "Generic (PLEG): container finished" podID="0b80f630-aab1-47ce-87ba-f8f753f5664a" containerID="e94eb76a3000d5a07046d0e6ca053fb8fdf4b49a0babc704cbd01c1af67832d3" exitCode=0
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.609877 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99vdf" event={"ID":"0b80f630-aab1-47ce-87ba-f8f753f5664a","Type":"ContainerDied","Data":"e94eb76a3000d5a07046d0e6ca053fb8fdf4b49a0babc704cbd01c1af67832d3"}
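The pod_startup_latency_tracker entries are straightforward arithmetic over the timestamps they print: podStartSLOduration appears to be watchObservedRunningTime minus podCreationTimestamp, and the zero-valued firstStartedPulling/lastFinishedPulling ("0001-01-01 00:00:00") indicate no image pull was recorded for the pod. Checking the machine-config-controller entry above:

```go
// Quick check of the tracker's arithmetic using values copied from the log
// entry for machine-config-controller-84d6567774-n2ds6; the layout string
// is Go's reference-time format for "2026-01-22 09:13:05.595956757 +0000 UTC".
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2026-01-22 09:10:55 +0000 UTC")
	observed, _ := time.Parse(layout, "2026-01-22 09:13:05.595956757 +0000 UTC")
	// Prints 2m10.595956757s, i.e. 130.595956757s == podStartSLOduration.
	fmt.Println(observed.Sub(created))
}
```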
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.609905 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99vdf" event={"ID":"0b80f630-aab1-47ce-87ba-f8f753f5664a","Type":"ContainerStarted","Data":"23b7ef1e7d19d338963b9f05b80d5fffb2692913e455c5c403d32fc289b57202"}
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.649771 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dd7gh"]
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.654507 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b"
Jan 22 09:13:05 crc kubenswrapper[4892]: E0122 09:13:05.658672 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:06.158658224 +0000 UTC m=+156.002737287 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.662365 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dd7gh"
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.671963 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.677405 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dd7gh"]
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.701156 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4jr8g"]
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.760770 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 09:13:05 crc kubenswrapper[4892]: E0122 09:13:05.761381 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:06.261364102 +0000 UTC m=+156.105443165 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.762231 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b"
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.762381 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63f7de77-6394-48d0-94f9-7b43fb7b715b-utilities\") pod \"redhat-operators-dd7gh\" (UID: \"63f7de77-6394-48d0-94f9-7b43fb7b715b\") " pod="openshift-marketplace/redhat-operators-dd7gh"
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.762556 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t7rz\" (UniqueName: \"kubernetes.io/projected/63f7de77-6394-48d0-94f9-7b43fb7b715b-kube-api-access-8t7rz\") pod \"redhat-operators-dd7gh\" (UID: \"63f7de77-6394-48d0-94f9-7b43fb7b715b\") " pod="openshift-marketplace/redhat-operators-dd7gh"
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.762737 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63f7de77-6394-48d0-94f9-7b43fb7b715b-catalog-content\") pod \"redhat-operators-dd7gh\" (UID: \"63f7de77-6394-48d0-94f9-7b43fb7b715b\") " pod="openshift-marketplace/redhat-operators-dd7gh"
Jan 22 09:13:05 crc kubenswrapper[4892]: E0122 09:13:05.764143 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:06.264127035 +0000 UTC m=+156.108206098 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.777650 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vw5wt" event={"ID":"909e154f-849c-4f69-96fb-74649b6db346","Type":"ContainerStarted","Data":"48155dedc8339404b000a110f8e990fb562e3298ebfa50e12ba9c884ca27ab2a"}
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.799974 4892 generic.go:334] "Generic (PLEG): container finished" podID="e788dce1-96bb-4768-950f-08abe5d34305" containerID="9041398a9898d05f4f3f000754d3f59aa7cf83cb89bc247de8f3aa1dadcabd5f" exitCode=0
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.800226 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k75r9" event={"ID":"e788dce1-96bb-4768-950f-08abe5d34305","Type":"ContainerDied","Data":"9041398a9898d05f4f3f000754d3f59aa7cf83cb89bc247de8f3aa1dadcabd5f"}
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.800251 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k75r9" event={"ID":"e788dce1-96bb-4768-950f-08abe5d34305","Type":"ContainerStarted","Data":"91f07f6f38ec6ae40131e1f760032e35296fa861f1907cc815c08d6ac32fba8b"}
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.835658 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wfs96" event={"ID":"82cc8d88-f16b-4fa6-90f3-c661418b5a12","Type":"ContainerStarted","Data":"005f3120be7df5c8dce8a811af5ce70eb29c42c6142fb89a48db1f21c238ed41"}
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.858470 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-86wr5" event={"ID":"23125b22-0965-46a8-a698-dc256f032b3c","Type":"ContainerStarted","Data":"893c891b339eb397b0aa03ad4c30a734ee906d093b363d420ae8e2c331830871"}
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.864320 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.864589 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63f7de77-6394-48d0-94f9-7b43fb7b715b-catalog-content\") pod \"redhat-operators-dd7gh\" (UID: \"63f7de77-6394-48d0-94f9-7b43fb7b715b\") " pod="openshift-marketplace/redhat-operators-dd7gh"
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.864739 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63f7de77-6394-48d0-94f9-7b43fb7b715b-utilities\") pod \"redhat-operators-dd7gh\" (UID: \"63f7de77-6394-48d0-94f9-7b43fb7b715b\") " pod="openshift-marketplace/redhat-operators-dd7gh"
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.864834 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t7rz\" (UniqueName: \"kubernetes.io/projected/63f7de77-6394-48d0-94f9-7b43fb7b715b-kube-api-access-8t7rz\") pod \"redhat-operators-dd7gh\" (UID: \"63f7de77-6394-48d0-94f9-7b43fb7b715b\") " pod="openshift-marketplace/redhat-operators-dd7gh"
Jan 22 09:13:05 crc kubenswrapper[4892]: E0122 09:13:05.865329 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:06.365314073 +0000 UTC m=+156.209393136 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.865716 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63f7de77-6394-48d0-94f9-7b43fb7b715b-catalog-content\") pod \"redhat-operators-dd7gh\" (UID: \"63f7de77-6394-48d0-94f9-7b43fb7b715b\") " pod="openshift-marketplace/redhat-operators-dd7gh"
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.865993 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63f7de77-6394-48d0-94f9-7b43fb7b715b-utilities\") pod \"redhat-operators-dd7gh\" (UID: \"63f7de77-6394-48d0-94f9-7b43fb7b715b\") " pod="openshift-marketplace/redhat-operators-dd7gh"
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.871568 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rhbtn" event={"ID":"1949fb71-227b-4649-b4e2-62b3d7519dec","Type":"ContainerStarted","Data":"34fbaaeb7c767ea5841d2227e67bb2b11eb779f0f73476b5568201d9c9beec66"}
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.884325 4892 generic.go:334] "Generic (PLEG): container finished" podID="a1422496-2b87-44ce-b710-32e8de483ebd" containerID="ddb646b53b6ab0142db5173cdddaa817fe8d139e595aa7dbc7cdd244ac1ef93f" exitCode=0
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.884408 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w7q5c" event={"ID":"a1422496-2b87-44ce-b710-32e8de483ebd","Type":"ContainerDied","Data":"ddb646b53b6ab0142db5173cdddaa817fe8d139e595aa7dbc7cdd244ac1ef93f"}
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.884433 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w7q5c" event={"ID":"a1422496-2b87-44ce-b710-32e8de483ebd","Type":"ContainerStarted","Data":"341f9cf1e505248470c332898eccdea5540e5d241e961f8e9ca208f76a718db1"}
Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.890692 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t7rz\" (UniqueName: \"kubernetes.io/projected/63f7de77-6394-48d0-94f9-7b43fb7b715b-kube-api-access-8t7rz\") pod \"redhat-operators-dd7gh\" (UID: \"63f7de77-6394-48d0-94f9-7b43fb7b715b\") " pod="openshift-marketplace/redhat-operators-dd7gh"
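For the freshly added catalog pods the sequence just logged is the normal fast path: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded, with no CSI driver involved, because catalog-content and utilities are kubernetes.io/empty-dir volumes and kube-api-access-8t7rz is the projected service-account token volume. A sketch of the equivalent volume definitions using k8s.io/api types; this is an illustrative reconstruction under the assumption that the pod spec follows the usual catalog-pod shape, since the log only names the volumes and their plugin kinds:

```go
// Illustrative reconstruction (assumes k8s.io/api is available as a module
// dependency); only the volume names and plugin kinds come from the log.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	volumes := []corev1.Volume{
		// kubernetes.io/empty-dir volumes: node-local scratch space, removed
		// when the pod leaves the node.
		{Name: "catalog-content", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		{Name: "utilities", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		// kubernetes.io/projected volume: the auto-generated service-account
		// token mount (the "token" path is the conventional default).
		{Name: "kube-api-access-8t7rz", VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{Path: "token"}},
				},
			},
		}},
	}
	for _, v := range volumes {
		fmt.Println(v.Name)
	}
}
```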
\"redhat-operators-dd7gh\" (UID: \"63f7de77-6394-48d0-94f9-7b43fb7b715b\") " pod="openshift-marketplace/redhat-operators-dd7gh" Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.893124 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hk2bk" event={"ID":"3d0f0ae0-7c09-407c-b260-127ff7828c39","Type":"ContainerStarted","Data":"6f7c9ce2b0be8c9cd1181aae27fb69a147395d76665409c81e7082a93cd88db9"} Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.893160 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-hk2bk" Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.893182 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hk2bk" event={"ID":"3d0f0ae0-7c09-407c-b260-127ff7828c39","Type":"ContainerStarted","Data":"d041a7b86c8ee29620f2a3075c57faf08fd4f811765261a682ec94d8efe23a18"} Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.899267 4892 patch_prober.go:28] interesting pod/downloads-7954f5f757-smrbs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.899321 4892 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-smrbs" podUID="d0142ef1-2eb1-43e9-99f3-e81a44383bd0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.910649 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pprl2" Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.946464 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-hk2bk" podStartSLOduration=9.946449614 podStartE2EDuration="9.946449614s" podCreationTimestamp="2026-01-22 09:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:05.942699996 +0000 UTC m=+155.786779079" watchObservedRunningTime="2026-01-22 09:13:05.946449614 +0000 UTC m=+155.790528677" Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.964630 4892 patch_prober.go:28] interesting pod/router-default-5444994796-bknqb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:13:05 crc kubenswrapper[4892]: [-]has-synced failed: reason withheld Jan 22 09:13:05 crc kubenswrapper[4892]: [+]process-running ok Jan 22 09:13:05 crc kubenswrapper[4892]: healthz check failed Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.965000 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bknqb" podUID="1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:13:05 crc kubenswrapper[4892]: I0122 09:13:05.966982 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" 
(UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:05 crc kubenswrapper[4892]: E0122 09:13:05.969655 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:06.469638233 +0000 UTC m=+156.313717296 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.002565 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dd7gh" Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.022481 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5djk8"] Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.023777 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5djk8" Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.030923 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5djk8"] Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.072846 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:13:06 crc kubenswrapper[4892]: E0122 09:13:06.076649 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:06.576633084 +0000 UTC m=+156.420712147 (durationBeforeRetry 500ms). 
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.077798 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b"
Jan 22 09:13:06 crc kubenswrapper[4892]: E0122 09:13:06.078898 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:06.578883003 +0000 UTC m=+156.422962116 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.187740 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.187960 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxpbv\" (UniqueName: \"kubernetes.io/projected/bbbc6b45-55e0-4f41-b54e-ee1fda017554-kube-api-access-fxpbv\") pod \"redhat-operators-5djk8\" (UID: \"bbbc6b45-55e0-4f41-b54e-ee1fda017554\") " pod="openshift-marketplace/redhat-operators-5djk8"
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.188007 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbbc6b45-55e0-4f41-b54e-ee1fda017554-utilities\") pod \"redhat-operators-5djk8\" (UID: \"bbbc6b45-55e0-4f41-b54e-ee1fda017554\") " pod="openshift-marketplace/redhat-operators-5djk8"
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.188044 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbbc6b45-55e0-4f41-b54e-ee1fda017554-catalog-content\") pod \"redhat-operators-5djk8\" (UID: \"bbbc6b45-55e0-4f41-b54e-ee1fda017554\") " pod="openshift-marketplace/redhat-operators-5djk8"
Jan 22 09:13:06 crc kubenswrapper[4892]: E0122 09:13:06.189904 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:06.689881379 +0000 UTC m=+156.533960442 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.292697 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxpbv\" (UniqueName: \"kubernetes.io/projected/bbbc6b45-55e0-4f41-b54e-ee1fda017554-kube-api-access-fxpbv\") pod \"redhat-operators-5djk8\" (UID: \"bbbc6b45-55e0-4f41-b54e-ee1fda017554\") " pod="openshift-marketplace/redhat-operators-5djk8"
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.293006 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b"
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.293035 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbbc6b45-55e0-4f41-b54e-ee1fda017554-utilities\") pod \"redhat-operators-5djk8\" (UID: \"bbbc6b45-55e0-4f41-b54e-ee1fda017554\") " pod="openshift-marketplace/redhat-operators-5djk8"
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.293073 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbbc6b45-55e0-4f41-b54e-ee1fda017554-catalog-content\") pod \"redhat-operators-5djk8\" (UID: \"bbbc6b45-55e0-4f41-b54e-ee1fda017554\") " pod="openshift-marketplace/redhat-operators-5djk8"
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.293484 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbbc6b45-55e0-4f41-b54e-ee1fda017554-catalog-content\") pod \"redhat-operators-5djk8\" (UID: \"bbbc6b45-55e0-4f41-b54e-ee1fda017554\") " pod="openshift-marketplace/redhat-operators-5djk8"
Jan 22 09:13:06 crc kubenswrapper[4892]: E0122 09:13:06.293944 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:06.793932341 +0000 UTC m=+156.638011404 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.294269 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbbc6b45-55e0-4f41-b54e-ee1fda017554-utilities\") pod \"redhat-operators-5djk8\" (UID: \"bbbc6b45-55e0-4f41-b54e-ee1fda017554\") " pod="openshift-marketplace/redhat-operators-5djk8"
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.337111 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxpbv\" (UniqueName: \"kubernetes.io/projected/bbbc6b45-55e0-4f41-b54e-ee1fda017554-kube-api-access-fxpbv\") pod \"redhat-operators-5djk8\" (UID: \"bbbc6b45-55e0-4f41-b54e-ee1fda017554\") " pod="openshift-marketplace/redhat-operators-5djk8"
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.379595 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5djk8"
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.393901 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 09:13:06 crc kubenswrapper[4892]: E0122 09:13:06.394218 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:06.894201535 +0000 UTC m=+156.738280588 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.496038 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b"
Jan 22 09:13:06 crc kubenswrapper[4892]: E0122 09:13:06.496457 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:06.996442931 +0000 UTC m=+156.840521994 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.508622 4892 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-pkwbh container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.508663 4892 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pkwbh" podUID="7a02064e-95a9-4e55-b21e-45868b0362f2" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.25:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.521449 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dd7gh"]
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.596899 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 09:13:06 crc kubenswrapper[4892]: E0122 09:13:06.597203 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:07.097187767 +0000 UTC m=+156.941266830 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.709691 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b"
Jan 22 09:13:06 crc kubenswrapper[4892]: E0122 09:13:06.710327 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:07.210315819 +0000 UTC m=+157.054394882 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.793385 4892 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.811811 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 09:13:06 crc kubenswrapper[4892]: E0122 09:13:06.812110 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:07.312078752 +0000 UTC m=+157.156157815 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.862232 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5djk8"]
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.913016 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b"
Jan 22 09:13:06 crc kubenswrapper[4892]: E0122 09:13:06.913331 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:07.413316721 +0000 UTC m=+157.257395784 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
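The plugin_watcher entry at 09:13:06.793385 is the turning point in this sequence: the kubelet has observed the driver's registration socket under /var/lib/kubelet/plugins_registry (the csi-hostpathplugin-rhbtn containers started moments earlier), so kubevirt.io.hostpath-provisioner can now complete registration and the parked mount/unmount operations for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 can succeed on a later retry. Conceptually, discovery amounts to noticing *-reg.sock files in that directory; a polling sketch under that assumption (the real kubelet plugin manager watches the directory for changes rather than polling):

```go
// Hedged sketch: list CSI registration sockets in the kubelet's
// plugins_registry directory, as the plugin watcher does conceptually.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const dir = "/var/lib/kubelet/plugins_registry"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read plugin registry:", err)
		return
	}
	for _, e := range entries {
		if strings.HasSuffix(e.Name(), "-reg.sock") {
			// e.g. kubevirt.io.hostpath-provisioner-reg.sock, as in the log
			fmt.Println("registration socket:", e.Name())
		}
	}
}
```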
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.931788 4892 generic.go:334] "Generic (PLEG): container finished" podID="b3591a30-3306-421b-a004-90ed127b1ac1" containerID="d4931b0d49bb33b5a09fdf2d7ac88996ee9038f39883e192a8043ceaf5bee4ab" exitCode=0
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.932126 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dgbdj" event={"ID":"b3591a30-3306-421b-a004-90ed127b1ac1","Type":"ContainerDied","Data":"d4931b0d49bb33b5a09fdf2d7ac88996ee9038f39883e192a8043ceaf5bee4ab"}
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.942057 4892 generic.go:334] "Generic (PLEG): container finished" podID="63f7de77-6394-48d0-94f9-7b43fb7b715b" containerID="ee9be3424a622800fef31fe6f7794ced466a0de082e7cffa5e8a05bb4681cb0e" exitCode=0
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.942147 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dd7gh" event={"ID":"63f7de77-6394-48d0-94f9-7b43fb7b715b","Type":"ContainerDied","Data":"ee9be3424a622800fef31fe6f7794ced466a0de082e7cffa5e8a05bb4681cb0e"}
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.942195 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dd7gh" event={"ID":"63f7de77-6394-48d0-94f9-7b43fb7b715b","Type":"ContainerStarted","Data":"0f98dcf222f60babd58a7443929db6f2c089ae153d41dfdf52ec383cd095a5ec"}
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.946468 4892 generic.go:334] "Generic (PLEG): container finished" podID="665bbb60-4e79-4d6d-b805-1e03ef3442be" containerID="e6910428801437b8c1abb965a14945a3975ce56eb8446d136960b8603089a37d" exitCode=0
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.946539 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-bgd4w" event={"ID":"665bbb60-4e79-4d6d-b805-1e03ef3442be","Type":"ContainerDied","Data":"e6910428801437b8c1abb965a14945a3975ce56eb8446d136960b8603089a37d"}
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.957014 4892 patch_prober.go:28] interesting pod/router-default-5444994796-bknqb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 09:13:06 crc kubenswrapper[4892]: [-]has-synced failed: reason withheld
Jan 22 09:13:06 crc kubenswrapper[4892]: [+]process-running ok
Jan 22 09:13:06 crc kubenswrapper[4892]: healthz check failed
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.957058 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bknqb" podUID="1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.957782 4892 generic.go:334] "Generic (PLEG): container finished" podID="de808d2e-2d5a-458c-a3ca-a6475a9fde39" containerID="637f609bf97f5506b83ac1743f17fd43d2f2ee072385a0e96b9081001763dc23" exitCode=0
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.957842 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jr8g" event={"ID":"de808d2e-2d5a-458c-a3ca-a6475a9fde39","Type":"ContainerDied","Data":"637f609bf97f5506b83ac1743f17fd43d2f2ee072385a0e96b9081001763dc23"}
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.957865 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jr8g" event={"ID":"de808d2e-2d5a-458c-a3ca-a6475a9fde39","Type":"ContainerStarted","Data":"bc6cfdd533d3c8d6a6269cc735847b0c16d804f3ad7ede0303f2fe156662d4c0"}
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.967503 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rhbtn" event={"ID":"1949fb71-227b-4649-b4e2-62b3d7519dec","Type":"ContainerStarted","Data":"bceb787022d67fad2f2a135faff54a6e88936ec185992ae6bd50e41f11d06304"}
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.967569 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rhbtn" event={"ID":"1949fb71-227b-4649-b4e2-62b3d7519dec","Type":"ContainerStarted","Data":"b13db1c9a83ac35576eb50cd5a069f2a99eecd178887ed9a2bdd792111b5e646"}
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.973180 4892 patch_prober.go:28] interesting pod/downloads-7954f5f757-smrbs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body=
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.973238 4892 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-smrbs" podUID="d0142ef1-2eb1-43e9-99f3-e81a44383bd0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused"
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.978553 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kw2w9"
Jan 22 09:13:06 crc kubenswrapper[4892]: I0122 09:13:06.979081 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pkwbh"
Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 09:13:07.016433 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 09:13:07 crc kubenswrapper[4892]: E0122 09:13:07.018131 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:07.518111624 +0000 UTC m=+157.362190687 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 09:13:07.118813 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b"
Jan 22 09:13:07 crc kubenswrapper[4892]: E0122 09:13:07.119103 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:07.619091146 +0000 UTC m=+157.463170209 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 09:13:07.219642 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 09:13:07 crc kubenswrapper[4892]: E0122 09:13:07.219967 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:07.719952176 +0000 UTC m=+157.564031239 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 09:13:07.321253 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b"
Jan 22 09:13:07 crc kubenswrapper[4892]: E0122 09:13:07.321651 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:07.821638607 +0000 UTC m=+157.665717670 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 09:13:07.422549 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 09:13:07 crc kubenswrapper[4892]: E0122 09:13:07.422694 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:07.922669691 +0000 UTC m=+157.766752754 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 09:13:07.423453 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b"
Jan 22 09:13:07 crc kubenswrapper[4892]: E0122 09:13:07.423777 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:07.923769359 +0000 UTC m=+157.767848422 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 09:13:07.524776 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 09:13:07 crc kubenswrapper[4892]: E0122 09:13:07.524915 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:13:08.024895616 +0000 UTC m=+157.868974679 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 09:13:07.525002 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:07 crc kubenswrapper[4892]: E0122 09:13:07.525297 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:13:08.025275216 +0000 UTC m=+157.869354279 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vbm7b" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 09:13:07.613108 4892 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-22T09:13:06.793405311Z","Handler":null,"Name":""} Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 09:13:07.618569 4892 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 09:13:07.618599 4892 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 09:13:07.626681 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 09:13:07.637336 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 09:13:07.728540 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 09:13:07.731272 4892 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 09:13:07.731312 4892 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 09:13:07.754174 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vbm7b\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 09:13:07.765594 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 09:13:07.797074 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 09:13:07.797855 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 09:13:07.799831 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 09:13:07.800254 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 09:13:07.806452 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 09:13:07.829727 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab9ba231-7bcd-46d5-8b79-b0aa83a7d224-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ab9ba231-7bcd-46d5-8b79-b0aa83a7d224\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 09:13:07.829801 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab9ba231-7bcd-46d5-8b79-b0aa83a7d224-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ab9ba231-7bcd-46d5-8b79-b0aa83a7d224\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 09:13:07.931279 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab9ba231-7bcd-46d5-8b79-b0aa83a7d224-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ab9ba231-7bcd-46d5-8b79-b0aa83a7d224\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 09:13:07.931381 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab9ba231-7bcd-46d5-8b79-b0aa83a7d224-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ab9ba231-7bcd-46d5-8b79-b0aa83a7d224\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 09:13:07.931464 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab9ba231-7bcd-46d5-8b79-b0aa83a7d224-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ab9ba231-7bcd-46d5-8b79-b0aa83a7d224\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 09:13:07.956836 4892 patch_prober.go:28] interesting pod/router-default-5444994796-bknqb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:13:07 crc kubenswrapper[4892]: [-]has-synced failed: reason withheld Jan 22 09:13:07 crc kubenswrapper[4892]: [+]process-running ok Jan 22 09:13:07 crc kubenswrapper[4892]: healthz check failed Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 09:13:07.957042 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bknqb" podUID="1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 
09:13:07.965542 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab9ba231-7bcd-46d5-8b79-b0aa83a7d224-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ab9ba231-7bcd-46d5-8b79-b0aa83a7d224\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 09:13:07.979468 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rhbtn" event={"ID":"1949fb71-227b-4649-b4e2-62b3d7519dec","Type":"ContainerStarted","Data":"d722a1b59c32bd06be2a6b2e174cf9e356098f8d5a3ee7d6f64f64c8fda7baf5"} Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 09:13:07.984948 4892 generic.go:334] "Generic (PLEG): container finished" podID="bbbc6b45-55e0-4f41-b54e-ee1fda017554" containerID="06a200eea161e5863e695f6b32538db87fadb8b5a5c85067081c46c1f5fa2654" exitCode=0 Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 09:13:07.985076 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5djk8" event={"ID":"bbbc6b45-55e0-4f41-b54e-ee1fda017554","Type":"ContainerDied","Data":"06a200eea161e5863e695f6b32538db87fadb8b5a5c85067081c46c1f5fa2654"} Jan 22 09:13:07 crc kubenswrapper[4892]: I0122 09:13:07.985120 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5djk8" event={"ID":"bbbc6b45-55e0-4f41-b54e-ee1fda017554","Type":"ContainerStarted","Data":"903c40188b9cbf272f8f9d7eccb475301b69157f62503cd0dffa52154036ce86"} Jan 22 09:13:08 crc kubenswrapper[4892]: I0122 09:13:08.013625 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-rhbtn" podStartSLOduration=12.013601883 podStartE2EDuration="12.013601883s" podCreationTimestamp="2026-01-22 09:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:08.003357784 +0000 UTC m=+157.847436847" watchObservedRunningTime="2026-01-22 09:13:08.013601883 +0000 UTC m=+157.857680946" Jan 22 09:13:08 crc kubenswrapper[4892]: I0122 09:13:08.118891 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 09:13:08 crc kubenswrapper[4892]: I0122 09:13:08.196621 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-bgd4w" Jan 22 09:13:08 crc kubenswrapper[4892]: I0122 09:13:08.235239 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/665bbb60-4e79-4d6d-b805-1e03ef3442be-config-volume\") pod \"665bbb60-4e79-4d6d-b805-1e03ef3442be\" (UID: \"665bbb60-4e79-4d6d-b805-1e03ef3442be\") " Jan 22 09:13:08 crc kubenswrapper[4892]: I0122 09:13:08.235314 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/665bbb60-4e79-4d6d-b805-1e03ef3442be-secret-volume\") pod \"665bbb60-4e79-4d6d-b805-1e03ef3442be\" (UID: \"665bbb60-4e79-4d6d-b805-1e03ef3442be\") " Jan 22 09:13:08 crc kubenswrapper[4892]: I0122 09:13:08.235372 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lk47\" (UniqueName: \"kubernetes.io/projected/665bbb60-4e79-4d6d-b805-1e03ef3442be-kube-api-access-2lk47\") pod \"665bbb60-4e79-4d6d-b805-1e03ef3442be\" (UID: \"665bbb60-4e79-4d6d-b805-1e03ef3442be\") " Jan 22 09:13:08 crc kubenswrapper[4892]: I0122 09:13:08.236597 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/665bbb60-4e79-4d6d-b805-1e03ef3442be-config-volume" (OuterVolumeSpecName: "config-volume") pod "665bbb60-4e79-4d6d-b805-1e03ef3442be" (UID: "665bbb60-4e79-4d6d-b805-1e03ef3442be"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:13:08 crc kubenswrapper[4892]: I0122 09:13:08.239245 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/665bbb60-4e79-4d6d-b805-1e03ef3442be-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "665bbb60-4e79-4d6d-b805-1e03ef3442be" (UID: "665bbb60-4e79-4d6d-b805-1e03ef3442be"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:13:08 crc kubenswrapper[4892]: I0122 09:13:08.239837 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/665bbb60-4e79-4d6d-b805-1e03ef3442be-kube-api-access-2lk47" (OuterVolumeSpecName: "kube-api-access-2lk47") pod "665bbb60-4e79-4d6d-b805-1e03ef3442be" (UID: "665bbb60-4e79-4d6d-b805-1e03ef3442be"). InnerVolumeSpecName "kube-api-access-2lk47". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:13:08 crc kubenswrapper[4892]: I0122 09:13:08.286126 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vbm7b"] Jan 22 09:13:08 crc kubenswrapper[4892]: W0122 09:13:08.329608 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5106e58c_1823_4cf9_8a5b_5fc9a001e8a7.slice/crio-d4a9a1b5bf58864af40df8bdad75f9c8aebeb12575d87dcc89d36a260a2db528 WatchSource:0}: Error finding container d4a9a1b5bf58864af40df8bdad75f9c8aebeb12575d87dcc89d36a260a2db528: Status 404 returned error can't find the container with id d4a9a1b5bf58864af40df8bdad75f9c8aebeb12575d87dcc89d36a260a2db528 Jan 22 09:13:08 crc kubenswrapper[4892]: I0122 09:13:08.337341 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lk47\" (UniqueName: \"kubernetes.io/projected/665bbb60-4e79-4d6d-b805-1e03ef3442be-kube-api-access-2lk47\") on node \"crc\" DevicePath \"\"" Jan 22 09:13:08 crc kubenswrapper[4892]: I0122 09:13:08.337413 4892 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/665bbb60-4e79-4d6d-b805-1e03ef3442be-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 09:13:08 crc kubenswrapper[4892]: I0122 09:13:08.337424 4892 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/665bbb60-4e79-4d6d-b805-1e03ef3442be-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 09:13:08 crc kubenswrapper[4892]: I0122 09:13:08.477982 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq" Jan 22 09:13:08 crc kubenswrapper[4892]: I0122 09:13:08.478737 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq" Jan 22 09:13:08 crc kubenswrapper[4892]: I0122 09:13:08.484940 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq" Jan 22 09:13:08 crc kubenswrapper[4892]: I0122 09:13:08.542299 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 22 09:13:08 crc kubenswrapper[4892]: I0122 09:13:08.571570 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:13:08 crc kubenswrapper[4892]: I0122 09:13:08.571606 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:13:08 crc kubenswrapper[4892]: I0122 09:13:08.579322 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:13:08 crc kubenswrapper[4892]: I0122 09:13:08.956672 4892 patch_prober.go:28] interesting pod/router-default-5444994796-bknqb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:13:08 crc kubenswrapper[4892]: [-]has-synced failed: reason withheld Jan 22 09:13:08 crc kubenswrapper[4892]: [+]process-running ok Jan 22 09:13:08 crc kubenswrapper[4892]: healthz check failed Jan 22 09:13:08 crc kubenswrapper[4892]: I0122 09:13:08.957146 4892 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5444994796-bknqb" podUID="1f9ad9cf-6a75-4df9-bb2e-e1142a897bf2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:13:09 crc kubenswrapper[4892]: I0122 09:13:09.006392 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-bgd4w" event={"ID":"665bbb60-4e79-4d6d-b805-1e03ef3442be","Type":"ContainerDied","Data":"50e401fecbd113809c531c596cc8173a5ada786527b3f8311ad28336a2a8b5e1"} Jan 22 09:13:09 crc kubenswrapper[4892]: I0122 09:13:09.006423 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50e401fecbd113809c531c596cc8173a5ada786527b3f8311ad28336a2a8b5e1" Jan 22 09:13:09 crc kubenswrapper[4892]: I0122 09:13:09.006477 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-bgd4w" Jan 22 09:13:09 crc kubenswrapper[4892]: I0122 09:13:09.020708 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" event={"ID":"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7","Type":"ContainerStarted","Data":"75ec72c5e37d8221310e182162053d31550e78bc7888e7c50d1d9b0ad746aeb6"} Jan 22 09:13:09 crc kubenswrapper[4892]: I0122 09:13:09.020779 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:09 crc kubenswrapper[4892]: I0122 09:13:09.020794 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" event={"ID":"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7","Type":"ContainerStarted","Data":"d4a9a1b5bf58864af40df8bdad75f9c8aebeb12575d87dcc89d36a260a2db528"} Jan 22 09:13:09 crc kubenswrapper[4892]: I0122 09:13:09.022668 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ab9ba231-7bcd-46d5-8b79-b0aa83a7d224","Type":"ContainerStarted","Data":"d8158bdea9890377fa30ba2be17fd0d374bb49a39bc848af0374b0f5e9d6330e"} Jan 22 09:13:09 crc kubenswrapper[4892]: I0122 09:13:09.028657 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lchcq" Jan 22 09:13:09 crc kubenswrapper[4892]: I0122 09:13:09.033881 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-zgz9g" Jan 22 09:13:09 crc kubenswrapper[4892]: I0122 09:13:09.098646 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" podStartSLOduration=134.098624164 podStartE2EDuration="2m14.098624164s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:09.044810221 +0000 UTC m=+158.888889284" watchObservedRunningTime="2026-01-22 09:13:09.098624164 +0000 UTC m=+158.942703227" Jan 22 09:13:09 crc kubenswrapper[4892]: I0122 09:13:09.430859 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 22 09:13:09 crc kubenswrapper[4892]: I0122 09:13:09.530424 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console/console-f9d7485db-95xtn" Jan 22 09:13:09 crc kubenswrapper[4892]: I0122 09:13:09.530468 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-95xtn" Jan 22 09:13:09 crc kubenswrapper[4892]: I0122 09:13:09.536513 4892 patch_prober.go:28] interesting pod/console-f9d7485db-95xtn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 22 09:13:09 crc kubenswrapper[4892]: I0122 09:13:09.536569 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-95xtn" podUID="ab72073f-69cb-4719-b896-54618a6925db" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 22 09:13:09 crc kubenswrapper[4892]: I0122 09:13:09.953651 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-bknqb" Jan 22 09:13:09 crc kubenswrapper[4892]: I0122 09:13:09.956644 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-bknqb" Jan 22 09:13:09 crc kubenswrapper[4892]: I0122 09:13:09.975033 4892 patch_prober.go:28] interesting pod/downloads-7954f5f757-smrbs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Jan 22 09:13:09 crc kubenswrapper[4892]: I0122 09:13:09.975068 4892 patch_prober.go:28] interesting pod/downloads-7954f5f757-smrbs container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Jan 22 09:13:09 crc kubenswrapper[4892]: I0122 09:13:09.975083 4892 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-smrbs" podUID="d0142ef1-2eb1-43e9-99f3-e81a44383bd0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" Jan 22 09:13:09 crc kubenswrapper[4892]: I0122 09:13:09.975120 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-smrbs" podUID="d0142ef1-2eb1-43e9-99f3-e81a44383bd0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" Jan 22 09:13:10 crc kubenswrapper[4892]: I0122 09:13:10.047925 4892 generic.go:334] "Generic (PLEG): container finished" podID="ab9ba231-7bcd-46d5-8b79-b0aa83a7d224" containerID="1576f2eae08701a0b9a2a05a2eac4e76874b4a047555d5919d0a99add494d59e" exitCode=0 Jan 22 09:13:10 crc kubenswrapper[4892]: I0122 09:13:10.048004 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ab9ba231-7bcd-46d5-8b79-b0aa83a7d224","Type":"ContainerDied","Data":"1576f2eae08701a0b9a2a05a2eac4e76874b4a047555d5919d0a99add494d59e"} Jan 22 09:13:10 crc kubenswrapper[4892]: I0122 09:13:10.051329 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-bknqb" Jan 22 09:13:11 crc kubenswrapper[4892]: I0122 09:13:11.328016 4892 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 09:13:11 crc kubenswrapper[4892]: I0122 09:13:11.423613 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab9ba231-7bcd-46d5-8b79-b0aa83a7d224-kubelet-dir\") pod \"ab9ba231-7bcd-46d5-8b79-b0aa83a7d224\" (UID: \"ab9ba231-7bcd-46d5-8b79-b0aa83a7d224\") " Jan 22 09:13:11 crc kubenswrapper[4892]: I0122 09:13:11.423710 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab9ba231-7bcd-46d5-8b79-b0aa83a7d224-kube-api-access\") pod \"ab9ba231-7bcd-46d5-8b79-b0aa83a7d224\" (UID: \"ab9ba231-7bcd-46d5-8b79-b0aa83a7d224\") " Jan 22 09:13:11 crc kubenswrapper[4892]: I0122 09:13:11.424319 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab9ba231-7bcd-46d5-8b79-b0aa83a7d224-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ab9ba231-7bcd-46d5-8b79-b0aa83a7d224" (UID: "ab9ba231-7bcd-46d5-8b79-b0aa83a7d224"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:13:11 crc kubenswrapper[4892]: I0122 09:13:11.445084 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab9ba231-7bcd-46d5-8b79-b0aa83a7d224-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ab9ba231-7bcd-46d5-8b79-b0aa83a7d224" (UID: "ab9ba231-7bcd-46d5-8b79-b0aa83a7d224"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:13:11 crc kubenswrapper[4892]: I0122 09:13:11.515931 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 22 09:13:11 crc kubenswrapper[4892]: E0122 09:13:11.516148 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab9ba231-7bcd-46d5-8b79-b0aa83a7d224" containerName="pruner" Jan 22 09:13:11 crc kubenswrapper[4892]: I0122 09:13:11.516161 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab9ba231-7bcd-46d5-8b79-b0aa83a7d224" containerName="pruner" Jan 22 09:13:11 crc kubenswrapper[4892]: E0122 09:13:11.516171 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="665bbb60-4e79-4d6d-b805-1e03ef3442be" containerName="collect-profiles" Jan 22 09:13:11 crc kubenswrapper[4892]: I0122 09:13:11.516178 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="665bbb60-4e79-4d6d-b805-1e03ef3442be" containerName="collect-profiles" Jan 22 09:13:11 crc kubenswrapper[4892]: I0122 09:13:11.516297 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab9ba231-7bcd-46d5-8b79-b0aa83a7d224" containerName="pruner" Jan 22 09:13:11 crc kubenswrapper[4892]: I0122 09:13:11.516319 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="665bbb60-4e79-4d6d-b805-1e03ef3442be" containerName="collect-profiles" Jan 22 09:13:11 crc kubenswrapper[4892]: I0122 09:13:11.516761 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 09:13:11 crc kubenswrapper[4892]: I0122 09:13:11.518717 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 22 09:13:11 crc kubenswrapper[4892]: I0122 09:13:11.519002 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 22 09:13:11 crc kubenswrapper[4892]: I0122 09:13:11.525925 4892 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab9ba231-7bcd-46d5-8b79-b0aa83a7d224-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 09:13:11 crc kubenswrapper[4892]: I0122 09:13:11.525951 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab9ba231-7bcd-46d5-8b79-b0aa83a7d224-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 09:13:11 crc kubenswrapper[4892]: I0122 09:13:11.527743 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 22 09:13:11 crc kubenswrapper[4892]: I0122 09:13:11.626553 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/79900f98-8939-4382-90a5-ae0696db4b70-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"79900f98-8939-4382-90a5-ae0696db4b70\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 09:13:11 crc kubenswrapper[4892]: I0122 09:13:11.626972 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79900f98-8939-4382-90a5-ae0696db4b70-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"79900f98-8939-4382-90a5-ae0696db4b70\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 09:13:11 crc kubenswrapper[4892]: I0122 09:13:11.727925 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79900f98-8939-4382-90a5-ae0696db4b70-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"79900f98-8939-4382-90a5-ae0696db4b70\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 09:13:11 crc kubenswrapper[4892]: I0122 09:13:11.728025 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/79900f98-8939-4382-90a5-ae0696db4b70-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"79900f98-8939-4382-90a5-ae0696db4b70\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 09:13:11 crc kubenswrapper[4892]: I0122 09:13:11.728107 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/79900f98-8939-4382-90a5-ae0696db4b70-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"79900f98-8939-4382-90a5-ae0696db4b70\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 09:13:11 crc kubenswrapper[4892]: I0122 09:13:11.745723 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79900f98-8939-4382-90a5-ae0696db4b70-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"79900f98-8939-4382-90a5-ae0696db4b70\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 09:13:11 crc kubenswrapper[4892]: I0122 
09:13:11.841777 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 09:13:12 crc kubenswrapper[4892]: I0122 09:13:12.070433 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ab9ba231-7bcd-46d5-8b79-b0aa83a7d224","Type":"ContainerDied","Data":"d8158bdea9890377fa30ba2be17fd0d374bb49a39bc848af0374b0f5e9d6330e"} Jan 22 09:13:12 crc kubenswrapper[4892]: I0122 09:13:12.070472 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8158bdea9890377fa30ba2be17fd0d374bb49a39bc848af0374b0f5e9d6330e" Jan 22 09:13:12 crc kubenswrapper[4892]: I0122 09:13:12.070523 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 09:13:12 crc kubenswrapper[4892]: I0122 09:13:12.087114 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 22 09:13:12 crc kubenswrapper[4892]: W0122 09:13:12.113486 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod79900f98_8939_4382_90a5_ae0696db4b70.slice/crio-989b2c6552b3cc5a75cbc6f268f9e5c6f466a07dbeb515c4fb698e97c85a6aaf WatchSource:0}: Error finding container 989b2c6552b3cc5a75cbc6f268f9e5c6f466a07dbeb515c4fb698e97c85a6aaf: Status 404 returned error can't find the container with id 989b2c6552b3cc5a75cbc6f268f9e5c6f466a07dbeb515c4fb698e97c85a6aaf Jan 22 09:13:13 crc kubenswrapper[4892]: I0122 09:13:13.083119 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"79900f98-8939-4382-90a5-ae0696db4b70","Type":"ContainerStarted","Data":"578c09e8b759c3700c34257c52242514f9e0e0ae3c6702a5860efcd06c37264e"} Jan 22 09:13:13 crc kubenswrapper[4892]: I0122 09:13:13.083707 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"79900f98-8939-4382-90a5-ae0696db4b70","Type":"ContainerStarted","Data":"989b2c6552b3cc5a75cbc6f268f9e5c6f466a07dbeb515c4fb698e97c85a6aaf"} Jan 22 09:13:14 crc kubenswrapper[4892]: I0122 09:13:14.098843 4892 generic.go:334] "Generic (PLEG): container finished" podID="79900f98-8939-4382-90a5-ae0696db4b70" containerID="578c09e8b759c3700c34257c52242514f9e0e0ae3c6702a5860efcd06c37264e" exitCode=0 Jan 22 09:13:14 crc kubenswrapper[4892]: I0122 09:13:14.098911 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"79900f98-8939-4382-90a5-ae0696db4b70","Type":"ContainerDied","Data":"578c09e8b759c3700c34257c52242514f9e0e0ae3c6702a5860efcd06c37264e"} Jan 22 09:13:15 crc kubenswrapper[4892]: I0122 09:13:15.000584 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-hk2bk" Jan 22 09:13:16 crc kubenswrapper[4892]: I0122 09:13:16.323521 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:13:16 crc kubenswrapper[4892]: I0122 09:13:16.323909 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" 
podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:13:17 crc kubenswrapper[4892]: I0122 09:13:17.625647 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7391f43-09a9-4333-8df2-72d4fdc02615-metrics-certs\") pod \"network-metrics-daemon-5nnld\" (UID: \"f7391f43-09a9-4333-8df2-72d4fdc02615\") " pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:13:17 crc kubenswrapper[4892]: I0122 09:13:17.641968 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7391f43-09a9-4333-8df2-72d4fdc02615-metrics-certs\") pod \"network-metrics-daemon-5nnld\" (UID: \"f7391f43-09a9-4333-8df2-72d4fdc02615\") " pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:13:17 crc kubenswrapper[4892]: I0122 09:13:17.731009 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5nnld" Jan 22 09:13:19 crc kubenswrapper[4892]: I0122 09:13:19.534046 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-95xtn" Jan 22 09:13:19 crc kubenswrapper[4892]: I0122 09:13:19.543204 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-95xtn" Jan 22 09:13:19 crc kubenswrapper[4892]: I0122 09:13:19.983246 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-smrbs" Jan 22 09:13:20 crc kubenswrapper[4892]: I0122 09:13:20.676415 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 09:13:20 crc kubenswrapper[4892]: I0122 09:13:20.797294 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/79900f98-8939-4382-90a5-ae0696db4b70-kubelet-dir\") pod \"79900f98-8939-4382-90a5-ae0696db4b70\" (UID: \"79900f98-8939-4382-90a5-ae0696db4b70\") " Jan 22 09:13:20 crc kubenswrapper[4892]: I0122 09:13:20.797384 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79900f98-8939-4382-90a5-ae0696db4b70-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "79900f98-8939-4382-90a5-ae0696db4b70" (UID: "79900f98-8939-4382-90a5-ae0696db4b70"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:13:20 crc kubenswrapper[4892]: I0122 09:13:20.797433 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79900f98-8939-4382-90a5-ae0696db4b70-kube-api-access\") pod \"79900f98-8939-4382-90a5-ae0696db4b70\" (UID: \"79900f98-8939-4382-90a5-ae0696db4b70\") " Jan 22 09:13:20 crc kubenswrapper[4892]: I0122 09:13:20.797649 4892 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/79900f98-8939-4382-90a5-ae0696db4b70-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 09:13:20 crc kubenswrapper[4892]: I0122 09:13:20.801568 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79900f98-8939-4382-90a5-ae0696db4b70-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "79900f98-8939-4382-90a5-ae0696db4b70" (UID: "79900f98-8939-4382-90a5-ae0696db4b70"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:13:20 crc kubenswrapper[4892]: I0122 09:13:20.898738 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79900f98-8939-4382-90a5-ae0696db4b70-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 09:13:21 crc kubenswrapper[4892]: I0122 09:13:21.149232 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"79900f98-8939-4382-90a5-ae0696db4b70","Type":"ContainerDied","Data":"989b2c6552b3cc5a75cbc6f268f9e5c6f466a07dbeb515c4fb698e97c85a6aaf"} Jan 22 09:13:21 crc kubenswrapper[4892]: I0122 09:13:21.149274 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="989b2c6552b3cc5a75cbc6f268f9e5c6f466a07dbeb515c4fb698e97c85a6aaf" Jan 22 09:13:21 crc kubenswrapper[4892]: I0122 09:13:21.149298 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 09:13:27 crc kubenswrapper[4892]: I0122 09:13:27.774496 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:13:38 crc kubenswrapper[4892]: I0122 09:13:38.343839 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:13:38 crc kubenswrapper[4892]: I0122 09:13:38.547796 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ctztv" Jan 22 09:13:40 crc kubenswrapper[4892]: I0122 09:13:40.539487 4892 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-st5q7 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 09:13:40 crc kubenswrapper[4892]: I0122 09:13:40.539552 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-st5q7" podUID="c8d4a47e-b68c-428e-9e69-11b1040dd23e" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 09:13:44 crc kubenswrapper[4892]: E0122 09:13:44.588255 4892 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 22 09:13:44 crc kubenswrapper[4892]: E0122 09:13:44.588707 4892 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fd8x6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-62vzg_openshift-marketplace(e984ad3d-befb-48a6-a5e3-597b3a8d4ff8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 09:13:44 crc kubenswrapper[4892]: E0122 09:13:44.589902 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-62vzg" podUID="e984ad3d-befb-48a6-a5e3-597b3a8d4ff8" Jan 22 09:13:46 crc kubenswrapper[4892]: I0122 09:13:46.323352 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:13:46 crc kubenswrapper[4892]: I0122 09:13:46.323719 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:13:46 crc kubenswrapper[4892]: E0122 09:13:46.734769 4892 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 22 09:13:46 crc kubenswrapper[4892]: E0122 09:13:46.734932 4892 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-shg9w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-4jr8g_openshift-marketplace(de808d2e-2d5a-458c-a3ca-a6475a9fde39): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: 
copying config: context canceled" logger="UnhandledError" Jan 22 09:13:46 crc kubenswrapper[4892]: E0122 09:13:46.736111 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-4jr8g" podUID="de808d2e-2d5a-458c-a3ca-a6475a9fde39" Jan 22 09:13:49 crc kubenswrapper[4892]: E0122 09:13:49.063120 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-62vzg" podUID="e984ad3d-befb-48a6-a5e3-597b3a8d4ff8" Jan 22 09:13:49 crc kubenswrapper[4892]: E0122 09:13:49.063707 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4jr8g" podUID="de808d2e-2d5a-458c-a3ca-a6475a9fde39" Jan 22 09:13:49 crc kubenswrapper[4892]: I0122 09:13:49.517441 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 22 09:13:49 crc kubenswrapper[4892]: E0122 09:13:49.517687 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79900f98-8939-4382-90a5-ae0696db4b70" containerName="pruner" Jan 22 09:13:49 crc kubenswrapper[4892]: I0122 09:13:49.517699 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="79900f98-8939-4382-90a5-ae0696db4b70" containerName="pruner" Jan 22 09:13:49 crc kubenswrapper[4892]: I0122 09:13:49.517791 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="79900f98-8939-4382-90a5-ae0696db4b70" containerName="pruner" Jan 22 09:13:49 crc kubenswrapper[4892]: I0122 09:13:49.518146 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 09:13:49 crc kubenswrapper[4892]: I0122 09:13:49.529456 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 22 09:13:49 crc kubenswrapper[4892]: I0122 09:13:49.529620 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 22 09:13:49 crc kubenswrapper[4892]: I0122 09:13:49.535997 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 22 09:13:49 crc kubenswrapper[4892]: I0122 09:13:49.604511 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14dc7f46-f078-43dc-95c7-6d33fdb63ab7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"14dc7f46-f078-43dc-95c7-6d33fdb63ab7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 09:13:49 crc kubenswrapper[4892]: I0122 09:13:49.605139 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14dc7f46-f078-43dc-95c7-6d33fdb63ab7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"14dc7f46-f078-43dc-95c7-6d33fdb63ab7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 09:13:49 crc kubenswrapper[4892]: I0122 09:13:49.705587 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14dc7f46-f078-43dc-95c7-6d33fdb63ab7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"14dc7f46-f078-43dc-95c7-6d33fdb63ab7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 09:13:49 crc kubenswrapper[4892]: I0122 09:13:49.705713 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14dc7f46-f078-43dc-95c7-6d33fdb63ab7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"14dc7f46-f078-43dc-95c7-6d33fdb63ab7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 09:13:49 crc kubenswrapper[4892]: I0122 09:13:49.705844 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14dc7f46-f078-43dc-95c7-6d33fdb63ab7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"14dc7f46-f078-43dc-95c7-6d33fdb63ab7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 09:13:49 crc kubenswrapper[4892]: I0122 09:13:49.728907 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14dc7f46-f078-43dc-95c7-6d33fdb63ab7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"14dc7f46-f078-43dc-95c7-6d33fdb63ab7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 09:13:49 crc kubenswrapper[4892]: I0122 09:13:49.867480 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 09:13:52 crc kubenswrapper[4892]: E0122 09:13:52.316176 4892 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 22 09:13:52 crc kubenswrapper[4892]: E0122 09:13:52.316649 4892 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qd8rm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-w7q5c_openshift-marketplace(a1422496-2b87-44ce-b710-32e8de483ebd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 09:13:52 crc kubenswrapper[4892]: E0122 09:13:52.317884 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-w7q5c" podUID="a1422496-2b87-44ce-b710-32e8de483ebd" Jan 22 09:13:53 crc kubenswrapper[4892]: I0122 09:13:53.709233 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 22 09:13:53 crc kubenswrapper[4892]: I0122 09:13:53.710039 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 09:13:53 crc kubenswrapper[4892]: I0122 09:13:53.715400 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 22 09:13:53 crc kubenswrapper[4892]: I0122 09:13:53.750701 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/44ffdd66-4baa-46c3-9441-4f46b6c0c835-var-lock\") pod \"installer-9-crc\" (UID: \"44ffdd66-4baa-46c3-9441-4f46b6c0c835\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 09:13:53 crc kubenswrapper[4892]: I0122 09:13:53.750745 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44ffdd66-4baa-46c3-9441-4f46b6c0c835-kube-api-access\") pod \"installer-9-crc\" (UID: \"44ffdd66-4baa-46c3-9441-4f46b6c0c835\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 09:13:53 crc kubenswrapper[4892]: I0122 09:13:53.750768 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/44ffdd66-4baa-46c3-9441-4f46b6c0c835-kubelet-dir\") pod \"installer-9-crc\" (UID: \"44ffdd66-4baa-46c3-9441-4f46b6c0c835\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 09:13:53 crc kubenswrapper[4892]: I0122 09:13:53.851868 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/44ffdd66-4baa-46c3-9441-4f46b6c0c835-var-lock\") pod \"installer-9-crc\" (UID: \"44ffdd66-4baa-46c3-9441-4f46b6c0c835\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 09:13:53 crc kubenswrapper[4892]: I0122 09:13:53.851934 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44ffdd66-4baa-46c3-9441-4f46b6c0c835-kube-api-access\") pod \"installer-9-crc\" (UID: \"44ffdd66-4baa-46c3-9441-4f46b6c0c835\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 09:13:53 crc kubenswrapper[4892]: I0122 09:13:53.851961 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/44ffdd66-4baa-46c3-9441-4f46b6c0c835-var-lock\") pod \"installer-9-crc\" (UID: \"44ffdd66-4baa-46c3-9441-4f46b6c0c835\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 09:13:53 crc kubenswrapper[4892]: I0122 09:13:53.851979 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/44ffdd66-4baa-46c3-9441-4f46b6c0c835-kubelet-dir\") pod \"installer-9-crc\" (UID: \"44ffdd66-4baa-46c3-9441-4f46b6c0c835\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 09:13:53 crc kubenswrapper[4892]: I0122 09:13:53.852159 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/44ffdd66-4baa-46c3-9441-4f46b6c0c835-kubelet-dir\") pod \"installer-9-crc\" (UID: \"44ffdd66-4baa-46c3-9441-4f46b6c0c835\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 09:13:53 crc kubenswrapper[4892]: I0122 09:13:53.868517 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44ffdd66-4baa-46c3-9441-4f46b6c0c835-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"44ffdd66-4baa-46c3-9441-4f46b6c0c835\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 09:13:54 crc kubenswrapper[4892]: I0122 09:13:54.033987 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 09:13:55 crc kubenswrapper[4892]: E0122 09:13:55.019619 4892 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 22 09:13:55 crc kubenswrapper[4892]: E0122 09:13:55.019773 4892 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qh65,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-k75r9_openshift-marketplace(e788dce1-96bb-4768-950f-08abe5d34305): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 09:13:55 crc kubenswrapper[4892]: E0122 09:13:55.021697 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-k75r9" podUID="e788dce1-96bb-4768-950f-08abe5d34305" Jan 22 09:13:55 crc kubenswrapper[4892]: E0122 09:13:55.484706 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-k75r9" podUID="e788dce1-96bb-4768-950f-08abe5d34305" Jan 22 09:13:55 crc kubenswrapper[4892]: E0122 09:13:55.484738 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-w7q5c" podUID="a1422496-2b87-44ce-b710-32e8de483ebd" Jan 22 09:13:55 crc kubenswrapper[4892]: I0122 09:13:55.914215 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 22 09:13:55 crc kubenswrapper[4892]: W0122 09:13:55.921442 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod44ffdd66_4baa_46c3_9441_4f46b6c0c835.slice/crio-6bcdf144206f9c7a99a341c876aa50b70e60247cd092a3cd1c6aad6fc485c2db WatchSource:0}: Error finding container 6bcdf144206f9c7a99a341c876aa50b70e60247cd092a3cd1c6aad6fc485c2db: Status 404 returned error can't find the container with id 6bcdf144206f9c7a99a341c876aa50b70e60247cd092a3cd1c6aad6fc485c2db Jan 22 09:13:55 crc kubenswrapper[4892]: W0122 09:13:55.951683 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7391f43_09a9_4333_8df2_72d4fdc02615.slice/crio-e1dc247ca57b00886339b9911a567e09d576923029d4ca5340e78a15c2298db9 WatchSource:0}: Error finding container e1dc247ca57b00886339b9911a567e09d576923029d4ca5340e78a15c2298db9: Status 404 returned error can't find the container with id e1dc247ca57b00886339b9911a567e09d576923029d4ca5340e78a15c2298db9 Jan 22 09:13:55 crc kubenswrapper[4892]: I0122 09:13:55.952384 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5nnld"] Jan 22 09:13:55 crc kubenswrapper[4892]: I0122 09:13:55.958225 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 22 09:13:56 crc kubenswrapper[4892]: W0122 09:13:56.041430 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod14dc7f46_f078_43dc_95c7_6d33fdb63ab7.slice/crio-789cd37c5b2fbb1b83050fe1ee12ce76368a252122edd95b718812356f09d627 WatchSource:0}: Error finding container 789cd37c5b2fbb1b83050fe1ee12ce76368a252122edd95b718812356f09d627: Status 404 returned error can't find the container with id 789cd37c5b2fbb1b83050fe1ee12ce76368a252122edd95b718812356f09d627 Jan 22 09:13:56 crc kubenswrapper[4892]: I0122 09:13:56.056304 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"44ffdd66-4baa-46c3-9441-4f46b6c0c835","Type":"ContainerStarted","Data":"6bcdf144206f9c7a99a341c876aa50b70e60247cd092a3cd1c6aad6fc485c2db"} Jan 22 09:13:56 crc kubenswrapper[4892]: I0122 09:13:56.057631 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"14dc7f46-f078-43dc-95c7-6d33fdb63ab7","Type":"ContainerStarted","Data":"789cd37c5b2fbb1b83050fe1ee12ce76368a252122edd95b718812356f09d627"} Jan 22 09:13:56 crc kubenswrapper[4892]: I0122 09:13:56.062521 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5nnld" event={"ID":"f7391f43-09a9-4333-8df2-72d4fdc02615","Type":"ContainerStarted","Data":"e1dc247ca57b00886339b9911a567e09d576923029d4ca5340e78a15c2298db9"} Jan 22 09:13:57 crc kubenswrapper[4892]: E0122 09:13:57.244713 4892 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 22 09:13:57 crc kubenswrapper[4892]: E0122 09:13:57.245474 4892 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8t7rz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dd7gh_openshift-marketplace(63f7de77-6394-48d0-94f9-7b43fb7b715b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 09:13:57 crc kubenswrapper[4892]: E0122 09:13:57.246703 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-dd7gh" podUID="63f7de77-6394-48d0-94f9-7b43fb7b715b" Jan 22 09:13:57 crc kubenswrapper[4892]: E0122 09:13:57.298549 4892 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 22 09:13:57 crc kubenswrapper[4892]: E0122 09:13:57.298754 4892 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t2lhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-dgbdj_openshift-marketplace(b3591a30-3306-421b-a004-90ed127b1ac1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 09:13:57 crc kubenswrapper[4892]: E0122 09:13:57.299920 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-dgbdj" podUID="b3591a30-3306-421b-a004-90ed127b1ac1" Jan 22 09:13:57 crc kubenswrapper[4892]: E0122 09:13:57.406317 4892 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 22 09:13:57 crc kubenswrapper[4892]: E0122 09:13:57.406703 4892 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fxpbv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-5djk8_openshift-marketplace(bbbc6b45-55e0-4f41-b54e-ee1fda017554): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 09:13:57 crc kubenswrapper[4892]: E0122 09:13:57.407953 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-5djk8" podUID="bbbc6b45-55e0-4f41-b54e-ee1fda017554" Jan 22 09:13:57 crc kubenswrapper[4892]: E0122 09:13:57.488615 4892 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 22 09:13:57 crc kubenswrapper[4892]: E0122 09:13:57.488791 4892 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gnh7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-99vdf_openshift-marketplace(0b80f630-aab1-47ce-87ba-f8f753f5664a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 09:13:57 crc kubenswrapper[4892]: E0122 09:13:57.489897 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-99vdf" podUID="0b80f630-aab1-47ce-87ba-f8f753f5664a" Jan 22 09:13:58 crc kubenswrapper[4892]: I0122 09:13:58.074655 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5nnld" event={"ID":"f7391f43-09a9-4333-8df2-72d4fdc02615","Type":"ContainerStarted","Data":"ef1ba19e432b537daedf4f85f81b1abcd04fa4de2e20b240e8e8896c185e5851"} Jan 22 09:13:58 crc kubenswrapper[4892]: I0122 09:13:58.074707 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5nnld" event={"ID":"f7391f43-09a9-4333-8df2-72d4fdc02615","Type":"ContainerStarted","Data":"31815ba67c27a0f554cd6e40b21807c0f4f1ad68b1d93fa3a127de38ee1d3dbf"} Jan 22 09:13:58 crc kubenswrapper[4892]: I0122 09:13:58.077061 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"44ffdd66-4baa-46c3-9441-4f46b6c0c835","Type":"ContainerStarted","Data":"9ad58a3ce8e27d6c33100b310b6534fd549158b157dc78ca359ef0bfd99fac3f"} Jan 22 09:13:58 crc kubenswrapper[4892]: I0122 09:13:58.080098 4892 generic.go:334] "Generic (PLEG): container finished" podID="14dc7f46-f078-43dc-95c7-6d33fdb63ab7" containerID="78b0b1b98473db0293f78fd04686ed123630850f0dacaac18ec3d0173b6a4dbc" exitCode=0 Jan 22 09:13:58 crc kubenswrapper[4892]: I0122 09:13:58.080775 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"14dc7f46-f078-43dc-95c7-6d33fdb63ab7","Type":"ContainerDied","Data":"78b0b1b98473db0293f78fd04686ed123630850f0dacaac18ec3d0173b6a4dbc"} Jan 22 09:13:58 crc kubenswrapper[4892]: 
E0122 09:13:58.081652 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-99vdf" podUID="0b80f630-aab1-47ce-87ba-f8f753f5664a" Jan 22 09:13:58 crc kubenswrapper[4892]: E0122 09:13:58.082637 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-dgbdj" podUID="b3591a30-3306-421b-a004-90ed127b1ac1" Jan 22 09:13:58 crc kubenswrapper[4892]: E0122 09:13:58.082697 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-5djk8" podUID="bbbc6b45-55e0-4f41-b54e-ee1fda017554" Jan 22 09:13:58 crc kubenswrapper[4892]: E0122 09:13:58.082742 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-dd7gh" podUID="63f7de77-6394-48d0-94f9-7b43fb7b715b" Jan 22 09:13:58 crc kubenswrapper[4892]: I0122 09:13:58.097037 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-5nnld" podStartSLOduration=183.09701922 podStartE2EDuration="3m3.09701922s" podCreationTimestamp="2026-01-22 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:58.089455491 +0000 UTC m=+207.933534564" watchObservedRunningTime="2026-01-22 09:13:58.09701922 +0000 UTC m=+207.941098283" Jan 22 09:13:58 crc kubenswrapper[4892]: I0122 09:13:58.156505 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=5.156484804 podStartE2EDuration="5.156484804s" podCreationTimestamp="2026-01-22 09:13:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:13:58.152181371 +0000 UTC m=+207.996260434" watchObservedRunningTime="2026-01-22 09:13:58.156484804 +0000 UTC m=+208.000563867" Jan 22 09:13:59 crc kubenswrapper[4892]: I0122 09:13:59.359056 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 09:13:59 crc kubenswrapper[4892]: I0122 09:13:59.525848 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14dc7f46-f078-43dc-95c7-6d33fdb63ab7-kube-api-access\") pod \"14dc7f46-f078-43dc-95c7-6d33fdb63ab7\" (UID: \"14dc7f46-f078-43dc-95c7-6d33fdb63ab7\") " Jan 22 09:13:59 crc kubenswrapper[4892]: I0122 09:13:59.525905 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14dc7f46-f078-43dc-95c7-6d33fdb63ab7-kubelet-dir\") pod \"14dc7f46-f078-43dc-95c7-6d33fdb63ab7\" (UID: \"14dc7f46-f078-43dc-95c7-6d33fdb63ab7\") " Jan 22 09:13:59 crc kubenswrapper[4892]: I0122 09:13:59.526076 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14dc7f46-f078-43dc-95c7-6d33fdb63ab7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "14dc7f46-f078-43dc-95c7-6d33fdb63ab7" (UID: "14dc7f46-f078-43dc-95c7-6d33fdb63ab7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:13:59 crc kubenswrapper[4892]: I0122 09:13:59.526347 4892 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14dc7f46-f078-43dc-95c7-6d33fdb63ab7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 09:13:59 crc kubenswrapper[4892]: I0122 09:13:59.534121 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14dc7f46-f078-43dc-95c7-6d33fdb63ab7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "14dc7f46-f078-43dc-95c7-6d33fdb63ab7" (UID: "14dc7f46-f078-43dc-95c7-6d33fdb63ab7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:13:59 crc kubenswrapper[4892]: I0122 09:13:59.628397 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14dc7f46-f078-43dc-95c7-6d33fdb63ab7-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:00 crc kubenswrapper[4892]: I0122 09:14:00.096675 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"14dc7f46-f078-43dc-95c7-6d33fdb63ab7","Type":"ContainerDied","Data":"789cd37c5b2fbb1b83050fe1ee12ce76368a252122edd95b718812356f09d627"} Jan 22 09:14:00 crc kubenswrapper[4892]: I0122 09:14:00.097083 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="789cd37c5b2fbb1b83050fe1ee12ce76368a252122edd95b718812356f09d627" Jan 22 09:14:00 crc kubenswrapper[4892]: I0122 09:14:00.096731 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 09:14:01 crc kubenswrapper[4892]: I0122 09:14:01.104452 4892 generic.go:334] "Generic (PLEG): container finished" podID="de808d2e-2d5a-458c-a3ca-a6475a9fde39" containerID="6ec254b8073080f3b9bce90c846a3f8328c34fa2647246d1db085ab34a423d54" exitCode=0 Jan 22 09:14:01 crc kubenswrapper[4892]: I0122 09:14:01.104625 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jr8g" event={"ID":"de808d2e-2d5a-458c-a3ca-a6475a9fde39","Type":"ContainerDied","Data":"6ec254b8073080f3b9bce90c846a3f8328c34fa2647246d1db085ab34a423d54"} Jan 22 09:14:03 crc kubenswrapper[4892]: I0122 09:14:03.124792 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62vzg" event={"ID":"e984ad3d-befb-48a6-a5e3-597b3a8d4ff8","Type":"ContainerStarted","Data":"0dc6b216667b83ba9589e8e09bef82bd9864e552a10a204d0e85f81d1191e5b8"} Jan 22 09:14:03 crc kubenswrapper[4892]: I0122 09:14:03.130804 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jr8g" event={"ID":"de808d2e-2d5a-458c-a3ca-a6475a9fde39","Type":"ContainerStarted","Data":"09a29568e50257bf160df6f3314fd79624e4da69797a7c2c4205da93f0752b01"} Jan 22 09:14:03 crc kubenswrapper[4892]: I0122 09:14:03.165465 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4jr8g" podStartSLOduration=3.1567308609999998 podStartE2EDuration="59.165441358s" podCreationTimestamp="2026-01-22 09:13:04 +0000 UTC" firstStartedPulling="2026-01-22 09:13:06.977876077 +0000 UTC m=+156.821955140" lastFinishedPulling="2026-01-22 09:14:02.986586574 +0000 UTC m=+212.830665637" observedRunningTime="2026-01-22 09:14:03.164973996 +0000 UTC m=+213.009053059" watchObservedRunningTime="2026-01-22 09:14:03.165441358 +0000 UTC m=+213.009520421" Jan 22 09:14:03 crc kubenswrapper[4892]: E0122 09:14:03.248625 4892 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode984ad3d_befb_48a6_a5e3_597b3a8d4ff8.slice/crio-0dc6b216667b83ba9589e8e09bef82bd9864e552a10a204d0e85f81d1191e5b8.scope\": RecentStats: unable to find data in memory cache]" Jan 22 09:14:04 crc kubenswrapper[4892]: I0122 09:14:04.137398 4892 generic.go:334] "Generic (PLEG): container finished" podID="e984ad3d-befb-48a6-a5e3-597b3a8d4ff8" containerID="0dc6b216667b83ba9589e8e09bef82bd9864e552a10a204d0e85f81d1191e5b8" exitCode=0 Jan 22 09:14:04 crc kubenswrapper[4892]: I0122 09:14:04.137464 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62vzg" event={"ID":"e984ad3d-befb-48a6-a5e3-597b3a8d4ff8","Type":"ContainerDied","Data":"0dc6b216667b83ba9589e8e09bef82bd9864e552a10a204d0e85f81d1191e5b8"} Jan 22 09:14:05 crc kubenswrapper[4892]: I0122 09:14:05.256924 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4jr8g" Jan 22 09:14:05 crc kubenswrapper[4892]: I0122 09:14:05.257415 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4jr8g" Jan 22 09:14:05 crc kubenswrapper[4892]: I0122 09:14:05.330849 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4jr8g" Jan 22 09:14:06 crc kubenswrapper[4892]: I0122 
09:14:06.161947 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62vzg" event={"ID":"e984ad3d-befb-48a6-a5e3-597b3a8d4ff8","Type":"ContainerStarted","Data":"90d7e1995451ed3d956c93d22c15efebaca7b2a73bcfc119cf69a9ab4cf1eb0e"} Jan 22 09:14:06 crc kubenswrapper[4892]: I0122 09:14:06.182314 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-62vzg" podStartSLOduration=4.809436715 podStartE2EDuration="1m4.182276636s" podCreationTimestamp="2026-01-22 09:13:02 +0000 UTC" firstStartedPulling="2026-01-22 09:13:05.5701633 +0000 UTC m=+155.414242363" lastFinishedPulling="2026-01-22 09:14:04.943003221 +0000 UTC m=+214.787082284" observedRunningTime="2026-01-22 09:14:06.178656261 +0000 UTC m=+216.022735324" watchObservedRunningTime="2026-01-22 09:14:06.182276636 +0000 UTC m=+216.026355699" Jan 22 09:14:10 crc kubenswrapper[4892]: I0122 09:14:10.183154 4892 generic.go:334] "Generic (PLEG): container finished" podID="a1422496-2b87-44ce-b710-32e8de483ebd" containerID="9b95e7e47d9bcc517c1f95f73e660eeac606725407b669f6e8c7ae74785ee746" exitCode=0 Jan 22 09:14:10 crc kubenswrapper[4892]: I0122 09:14:10.183242 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w7q5c" event={"ID":"a1422496-2b87-44ce-b710-32e8de483ebd","Type":"ContainerDied","Data":"9b95e7e47d9bcc517c1f95f73e660eeac606725407b669f6e8c7ae74785ee746"} Jan 22 09:14:10 crc kubenswrapper[4892]: I0122 09:14:10.187737 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99vdf" event={"ID":"0b80f630-aab1-47ce-87ba-f8f753f5664a","Type":"ContainerStarted","Data":"7c46f38bd4a4b35ba2639257f76c77d956ede2053108650167284e48b39704ab"} Jan 22 09:14:10 crc kubenswrapper[4892]: I0122 09:14:10.191171 4892 generic.go:334] "Generic (PLEG): container finished" podID="e788dce1-96bb-4768-950f-08abe5d34305" containerID="8623fbb5f9171b91ce90391cc0fbd576511fe05e17da39efdc4fcddc1e2fb28b" exitCode=0 Jan 22 09:14:10 crc kubenswrapper[4892]: I0122 09:14:10.191203 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k75r9" event={"ID":"e788dce1-96bb-4768-950f-08abe5d34305","Type":"ContainerDied","Data":"8623fbb5f9171b91ce90391cc0fbd576511fe05e17da39efdc4fcddc1e2fb28b"} Jan 22 09:14:11 crc kubenswrapper[4892]: I0122 09:14:11.196808 4892 generic.go:334] "Generic (PLEG): container finished" podID="b3591a30-3306-421b-a004-90ed127b1ac1" containerID="6b10fbe3dd1a95ea1459a8b9bbfedcb9383ad89b7e80589d3910ad59f19a31fc" exitCode=0 Jan 22 09:14:11 crc kubenswrapper[4892]: I0122 09:14:11.196890 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dgbdj" event={"ID":"b3591a30-3306-421b-a004-90ed127b1ac1","Type":"ContainerDied","Data":"6b10fbe3dd1a95ea1459a8b9bbfedcb9383ad89b7e80589d3910ad59f19a31fc"} Jan 22 09:14:11 crc kubenswrapper[4892]: I0122 09:14:11.198566 4892 generic.go:334] "Generic (PLEG): container finished" podID="0b80f630-aab1-47ce-87ba-f8f753f5664a" containerID="7c46f38bd4a4b35ba2639257f76c77d956ede2053108650167284e48b39704ab" exitCode=0 Jan 22 09:14:11 crc kubenswrapper[4892]: I0122 09:14:11.198597 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99vdf" event={"ID":"0b80f630-aab1-47ce-87ba-f8f753f5664a","Type":"ContainerDied","Data":"7c46f38bd4a4b35ba2639257f76c77d956ede2053108650167284e48b39704ab"} Jan 22 09:14:12 
crc kubenswrapper[4892]: I0122 09:14:12.205101 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k75r9" event={"ID":"e788dce1-96bb-4768-950f-08abe5d34305","Type":"ContainerStarted","Data":"197c933d1853e0c3f15ede074464ba833e6329f9620663f3e7ee58398a1bd7d1"} Jan 22 09:14:12 crc kubenswrapper[4892]: I0122 09:14:12.208896 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w7q5c" event={"ID":"a1422496-2b87-44ce-b710-32e8de483ebd","Type":"ContainerStarted","Data":"2e626dec01c67ad7b3d4a8498378feb7f5898fb6af1ec75e34d5ebb5416160d2"} Jan 22 09:14:12 crc kubenswrapper[4892]: I0122 09:14:12.227395 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k75r9" podStartSLOduration=3.906654514 podStartE2EDuration="1m9.227371862s" podCreationTimestamp="2026-01-22 09:13:03 +0000 UTC" firstStartedPulling="2026-01-22 09:13:05.801749523 +0000 UTC m=+155.645828576" lastFinishedPulling="2026-01-22 09:14:11.122466861 +0000 UTC m=+220.966545924" observedRunningTime="2026-01-22 09:14:12.223814688 +0000 UTC m=+222.067893751" watchObservedRunningTime="2026-01-22 09:14:12.227371862 +0000 UTC m=+222.071450925" Jan 22 09:14:12 crc kubenswrapper[4892]: I0122 09:14:12.243981 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w7q5c" podStartSLOduration=4.837991896 podStartE2EDuration="1m10.243963868s" podCreationTimestamp="2026-01-22 09:13:02 +0000 UTC" firstStartedPulling="2026-01-22 09:13:05.886584092 +0000 UTC m=+155.730663155" lastFinishedPulling="2026-01-22 09:14:11.292556064 +0000 UTC m=+221.136635127" observedRunningTime="2026-01-22 09:14:12.241677148 +0000 UTC m=+222.085756211" watchObservedRunningTime="2026-01-22 09:14:12.243963868 +0000 UTC m=+222.088042921" Jan 22 09:14:12 crc kubenswrapper[4892]: I0122 09:14:12.904462 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w7q5c" Jan 22 09:14:12 crc kubenswrapper[4892]: I0122 09:14:12.904531 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w7q5c" Jan 22 09:14:13 crc kubenswrapper[4892]: I0122 09:14:13.009160 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-62vzg" Jan 22 09:14:13 crc kubenswrapper[4892]: I0122 09:14:13.009228 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-62vzg" Jan 22 09:14:13 crc kubenswrapper[4892]: I0122 09:14:13.049451 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-62vzg" Jan 22 09:14:13 crc kubenswrapper[4892]: I0122 09:14:13.216751 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5djk8" event={"ID":"bbbc6b45-55e0-4f41-b54e-ee1fda017554","Type":"ContainerStarted","Data":"cbb2680476d4ba316df68ebf127017db5ed5315535d7cc430052e9b755129996"} Jan 22 09:14:13 crc kubenswrapper[4892]: I0122 09:14:13.218959 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dgbdj" event={"ID":"b3591a30-3306-421b-a004-90ed127b1ac1","Type":"ContainerStarted","Data":"d49938e7adcdd5459c5266076a26b7fbfeefca96bdeb9f12473c3aa36d4c93d4"} Jan 22 09:14:13 crc kubenswrapper[4892]: I0122 09:14:13.221367 
4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99vdf" event={"ID":"0b80f630-aab1-47ce-87ba-f8f753f5664a","Type":"ContainerStarted","Data":"0bb51b9ed69188e2c3307aab0be7ecd217ae6a53e00e7e7440462f9f2cb495b2"} Jan 22 09:14:13 crc kubenswrapper[4892]: I0122 09:14:13.235256 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-99vdf" Jan 22 09:14:13 crc kubenswrapper[4892]: I0122 09:14:13.235369 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-99vdf" Jan 22 09:14:13 crc kubenswrapper[4892]: I0122 09:14:13.305587 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dgbdj" podStartSLOduration=4.182644966 podStartE2EDuration="1m9.305571341s" podCreationTimestamp="2026-01-22 09:13:04 +0000 UTC" firstStartedPulling="2026-01-22 09:13:06.934042965 +0000 UTC m=+156.778122028" lastFinishedPulling="2026-01-22 09:14:12.05696933 +0000 UTC m=+221.901048403" observedRunningTime="2026-01-22 09:14:13.302367526 +0000 UTC m=+223.146446589" watchObservedRunningTime="2026-01-22 09:14:13.305571341 +0000 UTC m=+223.149650404" Jan 22 09:14:13 crc kubenswrapper[4892]: I0122 09:14:13.321383 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-62vzg" Jan 22 09:14:13 crc kubenswrapper[4892]: I0122 09:14:13.329276 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-99vdf" podStartSLOduration=5.059326976 podStartE2EDuration="1m11.329258874s" podCreationTimestamp="2026-01-22 09:13:02 +0000 UTC" firstStartedPulling="2026-01-22 09:13:05.622496564 +0000 UTC m=+155.466575627" lastFinishedPulling="2026-01-22 09:14:11.892428472 +0000 UTC m=+221.736507525" observedRunningTime="2026-01-22 09:14:13.32532724 +0000 UTC m=+223.169406313" watchObservedRunningTime="2026-01-22 09:14:13.329258874 +0000 UTC m=+223.173337937" Jan 22 09:14:13 crc kubenswrapper[4892]: I0122 09:14:13.462360 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k75r9" Jan 22 09:14:13 crc kubenswrapper[4892]: I0122 09:14:13.462448 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k75r9" Jan 22 09:14:13 crc kubenswrapper[4892]: I0122 09:14:13.956949 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-w7q5c" podUID="a1422496-2b87-44ce-b710-32e8de483ebd" containerName="registry-server" probeResult="failure" output=< Jan 22 09:14:13 crc kubenswrapper[4892]: timeout: failed to connect service ":50051" within 1s Jan 22 09:14:13 crc kubenswrapper[4892]: > Jan 22 09:14:14 crc kubenswrapper[4892]: I0122 09:14:14.229471 4892 generic.go:334] "Generic (PLEG): container finished" podID="bbbc6b45-55e0-4f41-b54e-ee1fda017554" containerID="cbb2680476d4ba316df68ebf127017db5ed5315535d7cc430052e9b755129996" exitCode=0 Jan 22 09:14:14 crc kubenswrapper[4892]: I0122 09:14:14.229604 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5djk8" event={"ID":"bbbc6b45-55e0-4f41-b54e-ee1fda017554","Type":"ContainerDied","Data":"cbb2680476d4ba316df68ebf127017db5ed5315535d7cc430052e9b755129996"} Jan 22 09:14:14 crc kubenswrapper[4892]: I0122 09:14:14.335090 4892 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-99vdf" podUID="0b80f630-aab1-47ce-87ba-f8f753f5664a" containerName="registry-server" probeResult="failure" output=< Jan 22 09:14:14 crc kubenswrapper[4892]: timeout: failed to connect service ":50051" within 1s Jan 22 09:14:14 crc kubenswrapper[4892]: > Jan 22 09:14:14 crc kubenswrapper[4892]: I0122 09:14:14.502342 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-k75r9" podUID="e788dce1-96bb-4768-950f-08abe5d34305" containerName="registry-server" probeResult="failure" output=< Jan 22 09:14:14 crc kubenswrapper[4892]: timeout: failed to connect service ":50051" within 1s Jan 22 09:14:14 crc kubenswrapper[4892]: > Jan 22 09:14:14 crc kubenswrapper[4892]: I0122 09:14:14.825882 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dgbdj" Jan 22 09:14:14 crc kubenswrapper[4892]: I0122 09:14:14.825944 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dgbdj" Jan 22 09:14:14 crc kubenswrapper[4892]: I0122 09:14:14.900202 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dgbdj" Jan 22 09:14:15 crc kubenswrapper[4892]: I0122 09:14:15.236318 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dd7gh" event={"ID":"63f7de77-6394-48d0-94f9-7b43fb7b715b","Type":"ContainerStarted","Data":"f0cd7bbf432fb973b5dad780013d37d8f189138803a9f5a9d19d986d900607ec"} Jan 22 09:14:15 crc kubenswrapper[4892]: I0122 09:14:15.312773 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4jr8g" Jan 22 09:14:16 crc kubenswrapper[4892]: I0122 09:14:16.243177 4892 generic.go:334] "Generic (PLEG): container finished" podID="63f7de77-6394-48d0-94f9-7b43fb7b715b" containerID="f0cd7bbf432fb973b5dad780013d37d8f189138803a9f5a9d19d986d900607ec" exitCode=0 Jan 22 09:14:16 crc kubenswrapper[4892]: I0122 09:14:16.243251 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dd7gh" event={"ID":"63f7de77-6394-48d0-94f9-7b43fb7b715b","Type":"ContainerDied","Data":"f0cd7bbf432fb973b5dad780013d37d8f189138803a9f5a9d19d986d900607ec"} Jan 22 09:14:16 crc kubenswrapper[4892]: I0122 09:14:16.323301 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:14:16 crc kubenswrapper[4892]: I0122 09:14:16.323409 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:14:16 crc kubenswrapper[4892]: I0122 09:14:16.323462 4892 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" Jan 22 09:14:16 crc kubenswrapper[4892]: I0122 09:14:16.324135 4892 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a"} pod="openshift-machine-config-operator/machine-config-daemon-w87tf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 09:14:16 crc kubenswrapper[4892]: I0122 09:14:16.324252 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" containerID="cri-o://f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a" gracePeriod=600 Jan 22 09:14:17 crc kubenswrapper[4892]: I0122 09:14:17.257796 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerDied","Data":"f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a"} Jan 22 09:14:17 crc kubenswrapper[4892]: I0122 09:14:17.257849 4892 generic.go:334] "Generic (PLEG): container finished" podID="4765e554-3060-4876-90fe-5e054619d7a1" containerID="f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a" exitCode=0 Jan 22 09:14:17 crc kubenswrapper[4892]: I0122 09:14:17.719619 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4jr8g"] Jan 22 09:14:17 crc kubenswrapper[4892]: I0122 09:14:17.719852 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4jr8g" podUID="de808d2e-2d5a-458c-a3ca-a6475a9fde39" containerName="registry-server" containerID="cri-o://09a29568e50257bf160df6f3314fd79624e4da69797a7c2c4205da93f0752b01" gracePeriod=2 Jan 22 09:14:18 crc kubenswrapper[4892]: I0122 09:14:18.265471 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5djk8" event={"ID":"bbbc6b45-55e0-4f41-b54e-ee1fda017554","Type":"ContainerStarted","Data":"7fb8e7af446633854e7f7753dedbc7744eb80d8d0b7187e33b40037f179e0aaa"} Jan 22 09:14:18 crc kubenswrapper[4892]: I0122 09:14:18.267811 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dd7gh" event={"ID":"63f7de77-6394-48d0-94f9-7b43fb7b715b","Type":"ContainerStarted","Data":"bb3b59d466cbc7caa383ed5cbbec77fc044a940dec14abe8fe2fbbddfb3dbd8b"} Jan 22 09:14:18 crc kubenswrapper[4892]: I0122 09:14:18.283548 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5djk8" podStartSLOduration=4.101276751 podStartE2EDuration="1m13.283529109s" podCreationTimestamp="2026-01-22 09:13:05 +0000 UTC" firstStartedPulling="2026-01-22 09:13:07.988834492 +0000 UTC m=+157.832913565" lastFinishedPulling="2026-01-22 09:14:17.17108686 +0000 UTC m=+227.015165923" observedRunningTime="2026-01-22 09:14:18.283093887 +0000 UTC m=+228.127172960" watchObservedRunningTime="2026-01-22 09:14:18.283529109 +0000 UTC m=+228.127608172" Jan 22 09:14:18 crc kubenswrapper[4892]: I0122 09:14:18.299179 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dd7gh" podStartSLOduration=2.914820169 podStartE2EDuration="1m13.29916497s" podCreationTimestamp="2026-01-22 09:13:05 +0000 UTC" firstStartedPulling="2026-01-22 09:13:06.95974184 +0000 UTC m=+156.803820903" lastFinishedPulling="2026-01-22 09:14:17.344086641 +0000 UTC m=+227.188165704" 
observedRunningTime="2026-01-22 09:14:18.295217946 +0000 UTC m=+228.139297009" watchObservedRunningTime="2026-01-22 09:14:18.29916497 +0000 UTC m=+228.143244033" Jan 22 09:14:18 crc kubenswrapper[4892]: I0122 09:14:18.806413 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4rzs7"] Jan 22 09:14:18 crc kubenswrapper[4892]: E0122 09:14:18.806758 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14dc7f46-f078-43dc-95c7-6d33fdb63ab7" containerName="pruner" Jan 22 09:14:18 crc kubenswrapper[4892]: I0122 09:14:18.806781 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="14dc7f46-f078-43dc-95c7-6d33fdb63ab7" containerName="pruner" Jan 22 09:14:18 crc kubenswrapper[4892]: I0122 09:14:18.806972 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="14dc7f46-f078-43dc-95c7-6d33fdb63ab7" containerName="pruner" Jan 22 09:14:18 crc kubenswrapper[4892]: I0122 09:14:18.807735 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" Jan 22 09:14:18 crc kubenswrapper[4892]: I0122 09:14:18.823680 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4rzs7"] Jan 22 09:14:18 crc kubenswrapper[4892]: I0122 09:14:18.997023 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0191742c-4602-476c-9a6a-52600797194a-trusted-ca\") pod \"image-registry-66df7c8f76-4rzs7\" (UID: \"0191742c-4602-476c-9a6a-52600797194a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" Jan 22 09:14:18 crc kubenswrapper[4892]: I0122 09:14:18.997069 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qgr5\" (UniqueName: \"kubernetes.io/projected/0191742c-4602-476c-9a6a-52600797194a-kube-api-access-6qgr5\") pod \"image-registry-66df7c8f76-4rzs7\" (UID: \"0191742c-4602-476c-9a6a-52600797194a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" Jan 22 09:14:18 crc kubenswrapper[4892]: I0122 09:14:18.997108 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-4rzs7\" (UID: \"0191742c-4602-476c-9a6a-52600797194a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" Jan 22 09:14:18 crc kubenswrapper[4892]: I0122 09:14:18.997134 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0191742c-4602-476c-9a6a-52600797194a-registry-tls\") pod \"image-registry-66df7c8f76-4rzs7\" (UID: \"0191742c-4602-476c-9a6a-52600797194a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" Jan 22 09:14:18 crc kubenswrapper[4892]: I0122 09:14:18.997168 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0191742c-4602-476c-9a6a-52600797194a-registry-certificates\") pod \"image-registry-66df7c8f76-4rzs7\" (UID: \"0191742c-4602-476c-9a6a-52600797194a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" Jan 22 09:14:18 crc kubenswrapper[4892]: I0122 09:14:18.997190 4892 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0191742c-4602-476c-9a6a-52600797194a-bound-sa-token\") pod \"image-registry-66df7c8f76-4rzs7\" (UID: \"0191742c-4602-476c-9a6a-52600797194a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" Jan 22 09:14:18 crc kubenswrapper[4892]: I0122 09:14:18.997223 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0191742c-4602-476c-9a6a-52600797194a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4rzs7\" (UID: \"0191742c-4602-476c-9a6a-52600797194a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" Jan 22 09:14:18 crc kubenswrapper[4892]: I0122 09:14:18.997241 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0191742c-4602-476c-9a6a-52600797194a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4rzs7\" (UID: \"0191742c-4602-476c-9a6a-52600797194a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" Jan 22 09:14:19 crc kubenswrapper[4892]: I0122 09:14:19.023767 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-4rzs7\" (UID: \"0191742c-4602-476c-9a6a-52600797194a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" Jan 22 09:14:19 crc kubenswrapper[4892]: I0122 09:14:19.098396 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qgr5\" (UniqueName: \"kubernetes.io/projected/0191742c-4602-476c-9a6a-52600797194a-kube-api-access-6qgr5\") pod \"image-registry-66df7c8f76-4rzs7\" (UID: \"0191742c-4602-476c-9a6a-52600797194a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" Jan 22 09:14:19 crc kubenswrapper[4892]: I0122 09:14:19.098475 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0191742c-4602-476c-9a6a-52600797194a-registry-tls\") pod \"image-registry-66df7c8f76-4rzs7\" (UID: \"0191742c-4602-476c-9a6a-52600797194a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" Jan 22 09:14:19 crc kubenswrapper[4892]: I0122 09:14:19.098521 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0191742c-4602-476c-9a6a-52600797194a-registry-certificates\") pod \"image-registry-66df7c8f76-4rzs7\" (UID: \"0191742c-4602-476c-9a6a-52600797194a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" Jan 22 09:14:19 crc kubenswrapper[4892]: I0122 09:14:19.098548 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0191742c-4602-476c-9a6a-52600797194a-bound-sa-token\") pod \"image-registry-66df7c8f76-4rzs7\" (UID: \"0191742c-4602-476c-9a6a-52600797194a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" Jan 22 09:14:19 crc kubenswrapper[4892]: I0122 09:14:19.098588 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/0191742c-4602-476c-9a6a-52600797194a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4rzs7\" (UID: \"0191742c-4602-476c-9a6a-52600797194a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" Jan 22 09:14:19 crc kubenswrapper[4892]: I0122 09:14:19.098639 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0191742c-4602-476c-9a6a-52600797194a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4rzs7\" (UID: \"0191742c-4602-476c-9a6a-52600797194a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" Jan 22 09:14:19 crc kubenswrapper[4892]: I0122 09:14:19.099161 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0191742c-4602-476c-9a6a-52600797194a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4rzs7\" (UID: \"0191742c-4602-476c-9a6a-52600797194a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" Jan 22 09:14:19 crc kubenswrapper[4892]: I0122 09:14:19.099883 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0191742c-4602-476c-9a6a-52600797194a-registry-certificates\") pod \"image-registry-66df7c8f76-4rzs7\" (UID: \"0191742c-4602-476c-9a6a-52600797194a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" Jan 22 09:14:19 crc kubenswrapper[4892]: I0122 09:14:19.100608 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0191742c-4602-476c-9a6a-52600797194a-trusted-ca\") pod \"image-registry-66df7c8f76-4rzs7\" (UID: \"0191742c-4602-476c-9a6a-52600797194a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" Jan 22 09:14:19 crc kubenswrapper[4892]: I0122 09:14:19.099256 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0191742c-4602-476c-9a6a-52600797194a-trusted-ca\") pod \"image-registry-66df7c8f76-4rzs7\" (UID: \"0191742c-4602-476c-9a6a-52600797194a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" Jan 22 09:14:19 crc kubenswrapper[4892]: I0122 09:14:19.104934 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0191742c-4602-476c-9a6a-52600797194a-registry-tls\") pod \"image-registry-66df7c8f76-4rzs7\" (UID: \"0191742c-4602-476c-9a6a-52600797194a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" Jan 22 09:14:19 crc kubenswrapper[4892]: I0122 09:14:19.107342 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0191742c-4602-476c-9a6a-52600797194a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4rzs7\" (UID: \"0191742c-4602-476c-9a6a-52600797194a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" Jan 22 09:14:19 crc kubenswrapper[4892]: I0122 09:14:19.117005 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qgr5\" (UniqueName: \"kubernetes.io/projected/0191742c-4602-476c-9a6a-52600797194a-kube-api-access-6qgr5\") pod \"image-registry-66df7c8f76-4rzs7\" (UID: \"0191742c-4602-476c-9a6a-52600797194a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" Jan 22 09:14:19 crc 
kubenswrapper[4892]: I0122 09:14:19.117169 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0191742c-4602-476c-9a6a-52600797194a-bound-sa-token\") pod \"image-registry-66df7c8f76-4rzs7\" (UID: \"0191742c-4602-476c-9a6a-52600797194a\") " pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" Jan 22 09:14:19 crc kubenswrapper[4892]: I0122 09:14:19.120411 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" Jan 22 09:14:19 crc kubenswrapper[4892]: I0122 09:14:19.279320 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerStarted","Data":"a7f0526153acdca2ca5f99af784bf184f41709f20620fb5551c5c6b34103a995"} Jan 22 09:14:19 crc kubenswrapper[4892]: I0122 09:14:19.283167 4892 generic.go:334] "Generic (PLEG): container finished" podID="de808d2e-2d5a-458c-a3ca-a6475a9fde39" containerID="09a29568e50257bf160df6f3314fd79624e4da69797a7c2c4205da93f0752b01" exitCode=0 Jan 22 09:14:19 crc kubenswrapper[4892]: I0122 09:14:19.283212 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jr8g" event={"ID":"de808d2e-2d5a-458c-a3ca-a6475a9fde39","Type":"ContainerDied","Data":"09a29568e50257bf160df6f3314fd79624e4da69797a7c2c4205da93f0752b01"} Jan 22 09:14:19 crc kubenswrapper[4892]: I0122 09:14:19.514683 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4rzs7"] Jan 22 09:14:19 crc kubenswrapper[4892]: I0122 09:14:19.780257 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4jr8g" Jan 22 09:14:19 crc kubenswrapper[4892]: I0122 09:14:19.825169 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de808d2e-2d5a-458c-a3ca-a6475a9fde39-catalog-content\") pod \"de808d2e-2d5a-458c-a3ca-a6475a9fde39\" (UID: \"de808d2e-2d5a-458c-a3ca-a6475a9fde39\") " Jan 22 09:14:19 crc kubenswrapper[4892]: I0122 09:14:19.825272 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shg9w\" (UniqueName: \"kubernetes.io/projected/de808d2e-2d5a-458c-a3ca-a6475a9fde39-kube-api-access-shg9w\") pod \"de808d2e-2d5a-458c-a3ca-a6475a9fde39\" (UID: \"de808d2e-2d5a-458c-a3ca-a6475a9fde39\") " Jan 22 09:14:19 crc kubenswrapper[4892]: I0122 09:14:19.825369 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de808d2e-2d5a-458c-a3ca-a6475a9fde39-utilities\") pod \"de808d2e-2d5a-458c-a3ca-a6475a9fde39\" (UID: \"de808d2e-2d5a-458c-a3ca-a6475a9fde39\") " Jan 22 09:14:19 crc kubenswrapper[4892]: I0122 09:14:19.826085 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de808d2e-2d5a-458c-a3ca-a6475a9fde39-utilities" (OuterVolumeSpecName: "utilities") pod "de808d2e-2d5a-458c-a3ca-a6475a9fde39" (UID: "de808d2e-2d5a-458c-a3ca-a6475a9fde39"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:14:19 crc kubenswrapper[4892]: I0122 09:14:19.830277 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de808d2e-2d5a-458c-a3ca-a6475a9fde39-kube-api-access-shg9w" (OuterVolumeSpecName: "kube-api-access-shg9w") pod "de808d2e-2d5a-458c-a3ca-a6475a9fde39" (UID: "de808d2e-2d5a-458c-a3ca-a6475a9fde39"). InnerVolumeSpecName "kube-api-access-shg9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:14:19 crc kubenswrapper[4892]: I0122 09:14:19.847477 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de808d2e-2d5a-458c-a3ca-a6475a9fde39-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de808d2e-2d5a-458c-a3ca-a6475a9fde39" (UID: "de808d2e-2d5a-458c-a3ca-a6475a9fde39"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:14:19 crc kubenswrapper[4892]: I0122 09:14:19.926552 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de808d2e-2d5a-458c-a3ca-a6475a9fde39-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:19 crc kubenswrapper[4892]: I0122 09:14:19.926615 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de808d2e-2d5a-458c-a3ca-a6475a9fde39-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:19 crc kubenswrapper[4892]: I0122 09:14:19.926630 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shg9w\" (UniqueName: \"kubernetes.io/projected/de808d2e-2d5a-458c-a3ca-a6475a9fde39-kube-api-access-shg9w\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:20 crc kubenswrapper[4892]: I0122 09:14:20.289203 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jr8g" event={"ID":"de808d2e-2d5a-458c-a3ca-a6475a9fde39","Type":"ContainerDied","Data":"bc6cfdd533d3c8d6a6269cc735847b0c16d804f3ad7ede0303f2fe156662d4c0"} Jan 22 09:14:20 crc kubenswrapper[4892]: I0122 09:14:20.289511 4892 scope.go:117] "RemoveContainer" containerID="09a29568e50257bf160df6f3314fd79624e4da69797a7c2c4205da93f0752b01" Jan 22 09:14:20 crc kubenswrapper[4892]: I0122 09:14:20.289641 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4jr8g" Jan 22 09:14:20 crc kubenswrapper[4892]: I0122 09:14:20.294850 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" event={"ID":"0191742c-4602-476c-9a6a-52600797194a","Type":"ContainerStarted","Data":"30c92fadb5f22396885199d310de33bf79fc0bb8d84325679e42c76c703dc120"} Jan 22 09:14:20 crc kubenswrapper[4892]: I0122 09:14:20.294891 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" event={"ID":"0191742c-4602-476c-9a6a-52600797194a","Type":"ContainerStarted","Data":"6b461c9ac0555da42d66ea8f203e5924d9b08439dc8799ae065c217755d8a652"} Jan 22 09:14:20 crc kubenswrapper[4892]: I0122 09:14:20.314051 4892 scope.go:117] "RemoveContainer" containerID="6ec254b8073080f3b9bce90c846a3f8328c34fa2647246d1db085ab34a423d54" Jan 22 09:14:20 crc kubenswrapper[4892]: I0122 09:14:20.334640 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4jr8g"] Jan 22 09:14:20 crc kubenswrapper[4892]: I0122 09:14:20.338334 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4jr8g"] Jan 22 09:14:20 crc kubenswrapper[4892]: I0122 09:14:20.350708 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" podStartSLOduration=2.350688719 podStartE2EDuration="2.350688719s" podCreationTimestamp="2026-01-22 09:14:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:14:20.35031843 +0000 UTC m=+230.194397483" watchObservedRunningTime="2026-01-22 09:14:20.350688719 +0000 UTC m=+230.194767782" Jan 22 09:14:20 crc kubenswrapper[4892]: I0122 09:14:20.364597 4892 scope.go:117] "RemoveContainer" containerID="637f609bf97f5506b83ac1743f17fd43d2f2ee072385a0e96b9081001763dc23" Jan 22 09:14:21 crc kubenswrapper[4892]: I0122 09:14:21.304186 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" Jan 22 09:14:21 crc kubenswrapper[4892]: I0122 09:14:21.432518 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de808d2e-2d5a-458c-a3ca-a6475a9fde39" path="/var/lib/kubelet/pods/de808d2e-2d5a-458c-a3ca-a6475a9fde39/volumes" Jan 22 09:14:22 crc kubenswrapper[4892]: I0122 09:14:22.946591 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w7q5c" Jan 22 09:14:22 crc kubenswrapper[4892]: I0122 09:14:22.987434 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w7q5c" Jan 22 09:14:23 crc kubenswrapper[4892]: I0122 09:14:23.274764 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-99vdf" Jan 22 09:14:23 crc kubenswrapper[4892]: I0122 09:14:23.315506 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-99vdf" Jan 22 09:14:23 crc kubenswrapper[4892]: I0122 09:14:23.512432 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k75r9" Jan 22 09:14:23 crc kubenswrapper[4892]: I0122 09:14:23.570118 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/community-operators-k75r9" Jan 22 09:14:24 crc kubenswrapper[4892]: I0122 09:14:24.523712 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k75r9"] Jan 22 09:14:24 crc kubenswrapper[4892]: I0122 09:14:24.868273 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dgbdj" Jan 22 09:14:25 crc kubenswrapper[4892]: I0122 09:14:25.320650 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k75r9" podUID="e788dce1-96bb-4768-950f-08abe5d34305" containerName="registry-server" containerID="cri-o://197c933d1853e0c3f15ede074464ba833e6329f9620663f3e7ee58398a1bd7d1" gracePeriod=2 Jan 22 09:14:25 crc kubenswrapper[4892]: I0122 09:14:25.543233 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-99vdf"] Jan 22 09:14:25 crc kubenswrapper[4892]: I0122 09:14:25.546773 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-99vdf" podUID="0b80f630-aab1-47ce-87ba-f8f753f5664a" containerName="registry-server" containerID="cri-o://0bb51b9ed69188e2c3307aab0be7ecd217ae6a53e00e7e7440462f9f2cb495b2" gracePeriod=30 Jan 22 09:14:25 crc kubenswrapper[4892]: I0122 09:14:25.548151 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w7q5c"] Jan 22 09:14:25 crc kubenswrapper[4892]: I0122 09:14:25.548395 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w7q5c" podUID="a1422496-2b87-44ce-b710-32e8de483ebd" containerName="registry-server" containerID="cri-o://2e626dec01c67ad7b3d4a8498378feb7f5898fb6af1ec75e34d5ebb5416160d2" gracePeriod=30 Jan 22 09:14:25 crc kubenswrapper[4892]: I0122 09:14:25.558508 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-62vzg"] Jan 22 09:14:25 crc kubenswrapper[4892]: I0122 09:14:25.558746 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-62vzg" podUID="e984ad3d-befb-48a6-a5e3-597b3a8d4ff8" containerName="registry-server" containerID="cri-o://90d7e1995451ed3d956c93d22c15efebaca7b2a73bcfc119cf69a9ab4cf1eb0e" gracePeriod=30 Jan 22 09:14:25 crc kubenswrapper[4892]: I0122 09:14:25.571209 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-56s7c"] Jan 22 09:14:25 crc kubenswrapper[4892]: I0122 09:14:25.572580 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-56s7c" podUID="fa7d6587-5137-4b9b-accb-3b4800c1bce6" containerName="marketplace-operator" containerID="cri-o://0e40c8beb807454cb29b37ba034642a70c19c4ab81643c0ac90bb0b5a53235df" gracePeriod=30 Jan 22 09:14:25 crc kubenswrapper[4892]: I0122 09:14:25.595372 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dgbdj"] Jan 22 09:14:25 crc kubenswrapper[4892]: I0122 09:14:25.595664 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dgbdj" podUID="b3591a30-3306-421b-a004-90ed127b1ac1" containerName="registry-server" containerID="cri-o://d49938e7adcdd5459c5266076a26b7fbfeefca96bdeb9f12473c3aa36d4c93d4" gracePeriod=30 
Jan 22 09:14:25 crc kubenswrapper[4892]: I0122 09:14:25.599368 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xt4nm"] Jan 22 09:14:25 crc kubenswrapper[4892]: E0122 09:14:25.599635 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de808d2e-2d5a-458c-a3ca-a6475a9fde39" containerName="registry-server" Jan 22 09:14:25 crc kubenswrapper[4892]: I0122 09:14:25.599651 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="de808d2e-2d5a-458c-a3ca-a6475a9fde39" containerName="registry-server" Jan 22 09:14:25 crc kubenswrapper[4892]: E0122 09:14:25.599660 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de808d2e-2d5a-458c-a3ca-a6475a9fde39" containerName="extract-content" Jan 22 09:14:25 crc kubenswrapper[4892]: I0122 09:14:25.599667 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="de808d2e-2d5a-458c-a3ca-a6475a9fde39" containerName="extract-content" Jan 22 09:14:25 crc kubenswrapper[4892]: E0122 09:14:25.599680 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de808d2e-2d5a-458c-a3ca-a6475a9fde39" containerName="extract-utilities" Jan 22 09:14:25 crc kubenswrapper[4892]: I0122 09:14:25.599686 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="de808d2e-2d5a-458c-a3ca-a6475a9fde39" containerName="extract-utilities" Jan 22 09:14:25 crc kubenswrapper[4892]: I0122 09:14:25.599791 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="de808d2e-2d5a-458c-a3ca-a6475a9fde39" containerName="registry-server" Jan 22 09:14:25 crc kubenswrapper[4892]: I0122 09:14:25.600175 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xt4nm" Jan 22 09:14:25 crc kubenswrapper[4892]: I0122 09:14:25.602457 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5djk8"] Jan 22 09:14:25 crc kubenswrapper[4892]: I0122 09:14:25.602642 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5djk8" podUID="bbbc6b45-55e0-4f41-b54e-ee1fda017554" containerName="registry-server" containerID="cri-o://7fb8e7af446633854e7f7753dedbc7744eb80d8d0b7187e33b40037f179e0aaa" gracePeriod=30 Jan 22 09:14:25 crc kubenswrapper[4892]: I0122 09:14:25.605626 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dd7gh"] Jan 22 09:14:25 crc kubenswrapper[4892]: I0122 09:14:25.605780 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dd7gh" podUID="63f7de77-6394-48d0-94f9-7b43fb7b715b" containerName="registry-server" containerID="cri-o://bb3b59d466cbc7caa383ed5cbbec77fc044a940dec14abe8fe2fbbddfb3dbd8b" gracePeriod=30 Jan 22 09:14:25 crc kubenswrapper[4892]: I0122 09:14:25.608096 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xt4nm"] Jan 22 09:14:25 crc kubenswrapper[4892]: I0122 09:14:25.705347 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/02e012df-582c-41ec-9c63-ff6dd7cc08c6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xt4nm\" (UID: \"02e012df-582c-41ec-9c63-ff6dd7cc08c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-xt4nm" Jan 22 09:14:25 crc kubenswrapper[4892]: I0122 
09:14:25.705421 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvwc6\" (UniqueName: \"kubernetes.io/projected/02e012df-582c-41ec-9c63-ff6dd7cc08c6-kube-api-access-jvwc6\") pod \"marketplace-operator-79b997595-xt4nm\" (UID: \"02e012df-582c-41ec-9c63-ff6dd7cc08c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-xt4nm" Jan 22 09:14:25 crc kubenswrapper[4892]: I0122 09:14:25.705508 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/02e012df-582c-41ec-9c63-ff6dd7cc08c6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xt4nm\" (UID: \"02e012df-582c-41ec-9c63-ff6dd7cc08c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-xt4nm" Jan 22 09:14:25 crc kubenswrapper[4892]: I0122 09:14:25.806795 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/02e012df-582c-41ec-9c63-ff6dd7cc08c6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xt4nm\" (UID: \"02e012df-582c-41ec-9c63-ff6dd7cc08c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-xt4nm" Jan 22 09:14:25 crc kubenswrapper[4892]: I0122 09:14:25.807159 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/02e012df-582c-41ec-9c63-ff6dd7cc08c6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xt4nm\" (UID: \"02e012df-582c-41ec-9c63-ff6dd7cc08c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-xt4nm" Jan 22 09:14:25 crc kubenswrapper[4892]: I0122 09:14:25.807183 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvwc6\" (UniqueName: \"kubernetes.io/projected/02e012df-582c-41ec-9c63-ff6dd7cc08c6-kube-api-access-jvwc6\") pod \"marketplace-operator-79b997595-xt4nm\" (UID: \"02e012df-582c-41ec-9c63-ff6dd7cc08c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-xt4nm" Jan 22 09:14:25 crc kubenswrapper[4892]: I0122 09:14:25.808486 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/02e012df-582c-41ec-9c63-ff6dd7cc08c6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xt4nm\" (UID: \"02e012df-582c-41ec-9c63-ff6dd7cc08c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-xt4nm" Jan 22 09:14:25 crc kubenswrapper[4892]: I0122 09:14:25.813881 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/02e012df-582c-41ec-9c63-ff6dd7cc08c6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xt4nm\" (UID: \"02e012df-582c-41ec-9c63-ff6dd7cc08c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-xt4nm" Jan 22 09:14:25 crc kubenswrapper[4892]: I0122 09:14:25.823297 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvwc6\" (UniqueName: \"kubernetes.io/projected/02e012df-582c-41ec-9c63-ff6dd7cc08c6-kube-api-access-jvwc6\") pod \"marketplace-operator-79b997595-xt4nm\" (UID: \"02e012df-582c-41ec-9c63-ff6dd7cc08c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-xt4nm" Jan 22 09:14:25 crc kubenswrapper[4892]: I0122 09:14:25.915972 4892 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xt4nm" Jan 22 09:14:26 crc kubenswrapper[4892]: I0122 09:14:26.003799 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dd7gh" Jan 22 09:14:26 crc kubenswrapper[4892]: I0122 09:14:26.282313 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xt4nm"] Jan 22 09:14:26 crc kubenswrapper[4892]: W0122 09:14:26.287988 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02e012df_582c_41ec_9c63_ff6dd7cc08c6.slice/crio-ba06617af479e3b319ad08c5ea5484677b731b5cca087b31d7517412d61f98ee WatchSource:0}: Error finding container ba06617af479e3b319ad08c5ea5484677b731b5cca087b31d7517412d61f98ee: Status 404 returned error can't find the container with id ba06617af479e3b319ad08c5ea5484677b731b5cca087b31d7517412d61f98ee Jan 22 09:14:26 crc kubenswrapper[4892]: I0122 09:14:26.326111 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xt4nm" event={"ID":"02e012df-582c-41ec-9c63-ff6dd7cc08c6","Type":"ContainerStarted","Data":"ba06617af479e3b319ad08c5ea5484677b731b5cca087b31d7517412d61f98ee"} Jan 22 09:14:26 crc kubenswrapper[4892]: I0122 09:14:26.381124 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5djk8" Jan 22 09:14:26 crc kubenswrapper[4892]: I0122 09:14:26.506439 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-j4vqv"] Jan 22 09:14:26 crc kubenswrapper[4892]: I0122 09:14:26.732971 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-99vdf"] Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.290736 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-56s7c" Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.332899 4892 generic.go:334] "Generic (PLEG): container finished" podID="0b80f630-aab1-47ce-87ba-f8f753f5664a" containerID="0bb51b9ed69188e2c3307aab0be7ecd217ae6a53e00e7e7440462f9f2cb495b2" exitCode=0 Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.332934 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99vdf" event={"ID":"0b80f630-aab1-47ce-87ba-f8f753f5664a","Type":"ContainerDied","Data":"0bb51b9ed69188e2c3307aab0be7ecd217ae6a53e00e7e7440462f9f2cb495b2"} Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.335361 4892 generic.go:334] "Generic (PLEG): container finished" podID="bbbc6b45-55e0-4f41-b54e-ee1fda017554" containerID="7fb8e7af446633854e7f7753dedbc7744eb80d8d0b7187e33b40037f179e0aaa" exitCode=0 Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.335436 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5djk8" event={"ID":"bbbc6b45-55e0-4f41-b54e-ee1fda017554","Type":"ContainerDied","Data":"7fb8e7af446633854e7f7753dedbc7744eb80d8d0b7187e33b40037f179e0aaa"} Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.337980 4892 generic.go:334] "Generic (PLEG): container finished" podID="e788dce1-96bb-4768-950f-08abe5d34305" containerID="197c933d1853e0c3f15ede074464ba833e6329f9620663f3e7ee58398a1bd7d1" exitCode=0 Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.338032 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k75r9" event={"ID":"e788dce1-96bb-4768-950f-08abe5d34305","Type":"ContainerDied","Data":"197c933d1853e0c3f15ede074464ba833e6329f9620663f3e7ee58398a1bd7d1"} Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.340391 4892 generic.go:334] "Generic (PLEG): container finished" podID="fa7d6587-5137-4b9b-accb-3b4800c1bce6" containerID="0e40c8beb807454cb29b37ba034642a70c19c4ab81643c0ac90bb0b5a53235df" exitCode=0 Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.340443 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-56s7c" event={"ID":"fa7d6587-5137-4b9b-accb-3b4800c1bce6","Type":"ContainerDied","Data":"0e40c8beb807454cb29b37ba034642a70c19c4ab81643c0ac90bb0b5a53235df"} Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.340469 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-56s7c" event={"ID":"fa7d6587-5137-4b9b-accb-3b4800c1bce6","Type":"ContainerDied","Data":"0bfe254aa2d484477c6c89466dd95a27aaae97d1b415d8faf04072499e44313f"} Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.340495 4892 scope.go:117] "RemoveContainer" containerID="0e40c8beb807454cb29b37ba034642a70c19c4ab81643c0ac90bb0b5a53235df" Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.340589 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-56s7c" Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.343906 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xt4nm" event={"ID":"02e012df-582c-41ec-9c63-ff6dd7cc08c6","Type":"ContainerStarted","Data":"c0819f7198f0bac8e633181d177565266df8497118809fe813a5dca359541bb3"} Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.344311 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-xt4nm" Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.347545 4892 generic.go:334] "Generic (PLEG): container finished" podID="a1422496-2b87-44ce-b710-32e8de483ebd" containerID="2e626dec01c67ad7b3d4a8498378feb7f5898fb6af1ec75e34d5ebb5416160d2" exitCode=0 Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.347594 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w7q5c" event={"ID":"a1422496-2b87-44ce-b710-32e8de483ebd","Type":"ContainerDied","Data":"2e626dec01c67ad7b3d4a8498378feb7f5898fb6af1ec75e34d5ebb5416160d2"} Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.349143 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-xt4nm" Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.350400 4892 generic.go:334] "Generic (PLEG): container finished" podID="b3591a30-3306-421b-a004-90ed127b1ac1" containerID="d49938e7adcdd5459c5266076a26b7fbfeefca96bdeb9f12473c3aa36d4c93d4" exitCode=0 Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.350442 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dgbdj" event={"ID":"b3591a30-3306-421b-a004-90ed127b1ac1","Type":"ContainerDied","Data":"d49938e7adcdd5459c5266076a26b7fbfeefca96bdeb9f12473c3aa36d4c93d4"} Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.353364 4892 generic.go:334] "Generic (PLEG): container finished" podID="e984ad3d-befb-48a6-a5e3-597b3a8d4ff8" containerID="90d7e1995451ed3d956c93d22c15efebaca7b2a73bcfc119cf69a9ab4cf1eb0e" exitCode=0 Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.353390 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62vzg" event={"ID":"e984ad3d-befb-48a6-a5e3-597b3a8d4ff8","Type":"ContainerDied","Data":"90d7e1995451ed3d956c93d22c15efebaca7b2a73bcfc119cf69a9ab4cf1eb0e"} Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.358013 4892 scope.go:117] "RemoveContainer" containerID="0e40c8beb807454cb29b37ba034642a70c19c4ab81643c0ac90bb0b5a53235df" Jan 22 09:14:27 crc kubenswrapper[4892]: E0122 09:14:27.360175 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e40c8beb807454cb29b37ba034642a70c19c4ab81643c0ac90bb0b5a53235df\": container with ID starting with 0e40c8beb807454cb29b37ba034642a70c19c4ab81643c0ac90bb0b5a53235df not found: ID does not exist" containerID="0e40c8beb807454cb29b37ba034642a70c19c4ab81643c0ac90bb0b5a53235df" Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.360207 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e40c8beb807454cb29b37ba034642a70c19c4ab81643c0ac90bb0b5a53235df"} err="failed to get container status \"0e40c8beb807454cb29b37ba034642a70c19c4ab81643c0ac90bb0b5a53235df\": rpc error: code = 
NotFound desc = could not find container \"0e40c8beb807454cb29b37ba034642a70c19c4ab81643c0ac90bb0b5a53235df\": container with ID starting with 0e40c8beb807454cb29b37ba034642a70c19c4ab81643c0ac90bb0b5a53235df not found: ID does not exist" Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.361040 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-xt4nm" podStartSLOduration=2.361029831 podStartE2EDuration="2.361029831s" podCreationTimestamp="2026-01-22 09:14:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:14:27.360569219 +0000 UTC m=+237.204648282" watchObservedRunningTime="2026-01-22 09:14:27.361029831 +0000 UTC m=+237.205108894" Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.439115 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa7d6587-5137-4b9b-accb-3b4800c1bce6-marketplace-trusted-ca\") pod \"fa7d6587-5137-4b9b-accb-3b4800c1bce6\" (UID: \"fa7d6587-5137-4b9b-accb-3b4800c1bce6\") " Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.439832 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa7d6587-5137-4b9b-accb-3b4800c1bce6-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "fa7d6587-5137-4b9b-accb-3b4800c1bce6" (UID: "fa7d6587-5137-4b9b-accb-3b4800c1bce6"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.439894 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h59xg\" (UniqueName: \"kubernetes.io/projected/fa7d6587-5137-4b9b-accb-3b4800c1bce6-kube-api-access-h59xg\") pod \"fa7d6587-5137-4b9b-accb-3b4800c1bce6\" (UID: \"fa7d6587-5137-4b9b-accb-3b4800c1bce6\") " Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.440464 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fa7d6587-5137-4b9b-accb-3b4800c1bce6-marketplace-operator-metrics\") pod \"fa7d6587-5137-4b9b-accb-3b4800c1bce6\" (UID: \"fa7d6587-5137-4b9b-accb-3b4800c1bce6\") " Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.440801 4892 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa7d6587-5137-4b9b-accb-3b4800c1bce6-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.444703 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa7d6587-5137-4b9b-accb-3b4800c1bce6-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "fa7d6587-5137-4b9b-accb-3b4800c1bce6" (UID: "fa7d6587-5137-4b9b-accb-3b4800c1bce6"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.444709 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa7d6587-5137-4b9b-accb-3b4800c1bce6-kube-api-access-h59xg" (OuterVolumeSpecName: "kube-api-access-h59xg") pod "fa7d6587-5137-4b9b-accb-3b4800c1bce6" (UID: "fa7d6587-5137-4b9b-accb-3b4800c1bce6"). 
InnerVolumeSpecName "kube-api-access-h59xg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.541877 4892 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fa7d6587-5137-4b9b-accb-3b4800c1bce6-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.542219 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h59xg\" (UniqueName: \"kubernetes.io/projected/fa7d6587-5137-4b9b-accb-3b4800c1bce6-kube-api-access-h59xg\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.662798 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-56s7c"] Jan 22 09:14:27 crc kubenswrapper[4892]: I0122 09:14:27.664657 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-56s7c"] Jan 22 09:14:28 crc kubenswrapper[4892]: I0122 09:14:28.360436 4892 generic.go:334] "Generic (PLEG): container finished" podID="63f7de77-6394-48d0-94f9-7b43fb7b715b" containerID="bb3b59d466cbc7caa383ed5cbbec77fc044a940dec14abe8fe2fbbddfb3dbd8b" exitCode=0 Jan 22 09:14:28 crc kubenswrapper[4892]: I0122 09:14:28.360535 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dd7gh" event={"ID":"63f7de77-6394-48d0-94f9-7b43fb7b715b","Type":"ContainerDied","Data":"bb3b59d466cbc7caa383ed5cbbec77fc044a940dec14abe8fe2fbbddfb3dbd8b"} Jan 22 09:14:28 crc kubenswrapper[4892]: I0122 09:14:28.580171 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dgbdj" Jan 22 09:14:28 crc kubenswrapper[4892]: I0122 09:14:28.656998 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2lhx\" (UniqueName: \"kubernetes.io/projected/b3591a30-3306-421b-a004-90ed127b1ac1-kube-api-access-t2lhx\") pod \"b3591a30-3306-421b-a004-90ed127b1ac1\" (UID: \"b3591a30-3306-421b-a004-90ed127b1ac1\") " Jan 22 09:14:28 crc kubenswrapper[4892]: I0122 09:14:28.657082 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3591a30-3306-421b-a004-90ed127b1ac1-catalog-content\") pod \"b3591a30-3306-421b-a004-90ed127b1ac1\" (UID: \"b3591a30-3306-421b-a004-90ed127b1ac1\") " Jan 22 09:14:28 crc kubenswrapper[4892]: I0122 09:14:28.657114 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3591a30-3306-421b-a004-90ed127b1ac1-utilities\") pod \"b3591a30-3306-421b-a004-90ed127b1ac1\" (UID: \"b3591a30-3306-421b-a004-90ed127b1ac1\") " Jan 22 09:14:28 crc kubenswrapper[4892]: I0122 09:14:28.657791 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3591a30-3306-421b-a004-90ed127b1ac1-utilities" (OuterVolumeSpecName: "utilities") pod "b3591a30-3306-421b-a004-90ed127b1ac1" (UID: "b3591a30-3306-421b-a004-90ed127b1ac1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:14:28 crc kubenswrapper[4892]: I0122 09:14:28.662054 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3591a30-3306-421b-a004-90ed127b1ac1-kube-api-access-t2lhx" (OuterVolumeSpecName: "kube-api-access-t2lhx") pod "b3591a30-3306-421b-a004-90ed127b1ac1" (UID: "b3591a30-3306-421b-a004-90ed127b1ac1"). InnerVolumeSpecName "kube-api-access-t2lhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:14:28 crc kubenswrapper[4892]: I0122 09:14:28.677204 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-99vdf" Jan 22 09:14:28 crc kubenswrapper[4892]: I0122 09:14:28.680698 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3591a30-3306-421b-a004-90ed127b1ac1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3591a30-3306-421b-a004-90ed127b1ac1" (UID: "b3591a30-3306-421b-a004-90ed127b1ac1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:14:28 crc kubenswrapper[4892]: I0122 09:14:28.761070 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnh7q\" (UniqueName: \"kubernetes.io/projected/0b80f630-aab1-47ce-87ba-f8f753f5664a-kube-api-access-gnh7q\") pod \"0b80f630-aab1-47ce-87ba-f8f753f5664a\" (UID: \"0b80f630-aab1-47ce-87ba-f8f753f5664a\") " Jan 22 09:14:28 crc kubenswrapper[4892]: I0122 09:14:28.761312 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b80f630-aab1-47ce-87ba-f8f753f5664a-utilities\") pod \"0b80f630-aab1-47ce-87ba-f8f753f5664a\" (UID: \"0b80f630-aab1-47ce-87ba-f8f753f5664a\") " Jan 22 09:14:28 crc kubenswrapper[4892]: I0122 09:14:28.761554 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b80f630-aab1-47ce-87ba-f8f753f5664a-catalog-content\") pod \"0b80f630-aab1-47ce-87ba-f8f753f5664a\" (UID: \"0b80f630-aab1-47ce-87ba-f8f753f5664a\") " Jan 22 09:14:28 crc kubenswrapper[4892]: I0122 09:14:28.761936 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b80f630-aab1-47ce-87ba-f8f753f5664a-utilities" (OuterVolumeSpecName: "utilities") pod "0b80f630-aab1-47ce-87ba-f8f753f5664a" (UID: "0b80f630-aab1-47ce-87ba-f8f753f5664a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:14:28 crc kubenswrapper[4892]: I0122 09:14:28.761959 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3591a30-3306-421b-a004-90ed127b1ac1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:28 crc kubenswrapper[4892]: I0122 09:14:28.761975 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3591a30-3306-421b-a004-90ed127b1ac1-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:28 crc kubenswrapper[4892]: I0122 09:14:28.761984 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2lhx\" (UniqueName: \"kubernetes.io/projected/b3591a30-3306-421b-a004-90ed127b1ac1-kube-api-access-t2lhx\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:28 crc kubenswrapper[4892]: I0122 09:14:28.764564 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b80f630-aab1-47ce-87ba-f8f753f5664a-kube-api-access-gnh7q" (OuterVolumeSpecName: "kube-api-access-gnh7q") pod "0b80f630-aab1-47ce-87ba-f8f753f5664a" (UID: "0b80f630-aab1-47ce-87ba-f8f753f5664a"). InnerVolumeSpecName "kube-api-access-gnh7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:14:28 crc kubenswrapper[4892]: I0122 09:14:28.819660 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b80f630-aab1-47ce-87ba-f8f753f5664a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b80f630-aab1-47ce-87ba-f8f753f5664a" (UID: "0b80f630-aab1-47ce-87ba-f8f753f5664a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:14:28 crc kubenswrapper[4892]: I0122 09:14:28.863830 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnh7q\" (UniqueName: \"kubernetes.io/projected/0b80f630-aab1-47ce-87ba-f8f753f5664a-kube-api-access-gnh7q\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:28 crc kubenswrapper[4892]: I0122 09:14:28.863857 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b80f630-aab1-47ce-87ba-f8f753f5664a-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:28 crc kubenswrapper[4892]: I0122 09:14:28.863866 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b80f630-aab1-47ce-87ba-f8f753f5664a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:28 crc kubenswrapper[4892]: I0122 09:14:28.893185 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-62vzg" Jan 22 09:14:28 crc kubenswrapper[4892]: I0122 09:14:28.992705 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k75r9" Jan 22 09:14:28 crc kubenswrapper[4892]: I0122 09:14:28.996405 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5djk8" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.046883 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w7q5c" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.052969 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dd7gh" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.067365 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e984ad3d-befb-48a6-a5e3-597b3a8d4ff8-utilities\") pod \"e984ad3d-befb-48a6-a5e3-597b3a8d4ff8\" (UID: \"e984ad3d-befb-48a6-a5e3-597b3a8d4ff8\") " Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.067531 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd8x6\" (UniqueName: \"kubernetes.io/projected/e984ad3d-befb-48a6-a5e3-597b3a8d4ff8-kube-api-access-fd8x6\") pod \"e984ad3d-befb-48a6-a5e3-597b3a8d4ff8\" (UID: \"e984ad3d-befb-48a6-a5e3-597b3a8d4ff8\") " Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.067561 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e984ad3d-befb-48a6-a5e3-597b3a8d4ff8-catalog-content\") pod \"e984ad3d-befb-48a6-a5e3-597b3a8d4ff8\" (UID: \"e984ad3d-befb-48a6-a5e3-597b3a8d4ff8\") " Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.068158 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e984ad3d-befb-48a6-a5e3-597b3a8d4ff8-utilities" (OuterVolumeSpecName: "utilities") pod "e984ad3d-befb-48a6-a5e3-597b3a8d4ff8" (UID: "e984ad3d-befb-48a6-a5e3-597b3a8d4ff8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.070656 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e984ad3d-befb-48a6-a5e3-597b3a8d4ff8-kube-api-access-fd8x6" (OuterVolumeSpecName: "kube-api-access-fd8x6") pod "e984ad3d-befb-48a6-a5e3-597b3a8d4ff8" (UID: "e984ad3d-befb-48a6-a5e3-597b3a8d4ff8"). InnerVolumeSpecName "kube-api-access-fd8x6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.121199 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e984ad3d-befb-48a6-a5e3-597b3a8d4ff8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e984ad3d-befb-48a6-a5e3-597b3a8d4ff8" (UID: "e984ad3d-befb-48a6-a5e3-597b3a8d4ff8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.168768 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qd8rm\" (UniqueName: \"kubernetes.io/projected/a1422496-2b87-44ce-b710-32e8de483ebd-kube-api-access-qd8rm\") pod \"a1422496-2b87-44ce-b710-32e8de483ebd\" (UID: \"a1422496-2b87-44ce-b710-32e8de483ebd\") " Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.168819 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxpbv\" (UniqueName: \"kubernetes.io/projected/bbbc6b45-55e0-4f41-b54e-ee1fda017554-kube-api-access-fxpbv\") pod \"bbbc6b45-55e0-4f41-b54e-ee1fda017554\" (UID: \"bbbc6b45-55e0-4f41-b54e-ee1fda017554\") " Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.168849 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e788dce1-96bb-4768-950f-08abe5d34305-catalog-content\") pod \"e788dce1-96bb-4768-950f-08abe5d34305\" (UID: \"e788dce1-96bb-4768-950f-08abe5d34305\") " Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.168875 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qh65\" (UniqueName: \"kubernetes.io/projected/e788dce1-96bb-4768-950f-08abe5d34305-kube-api-access-6qh65\") pod \"e788dce1-96bb-4768-950f-08abe5d34305\" (UID: \"e788dce1-96bb-4768-950f-08abe5d34305\") " Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.168908 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1422496-2b87-44ce-b710-32e8de483ebd-utilities\") pod \"a1422496-2b87-44ce-b710-32e8de483ebd\" (UID: \"a1422496-2b87-44ce-b710-32e8de483ebd\") " Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.168929 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1422496-2b87-44ce-b710-32e8de483ebd-catalog-content\") pod \"a1422496-2b87-44ce-b710-32e8de483ebd\" (UID: \"a1422496-2b87-44ce-b710-32e8de483ebd\") " Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.168947 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63f7de77-6394-48d0-94f9-7b43fb7b715b-catalog-content\") pod \"63f7de77-6394-48d0-94f9-7b43fb7b715b\" (UID: \"63f7de77-6394-48d0-94f9-7b43fb7b715b\") " Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.169001 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbbc6b45-55e0-4f41-b54e-ee1fda017554-utilities\") pod \"bbbc6b45-55e0-4f41-b54e-ee1fda017554\" (UID: \"bbbc6b45-55e0-4f41-b54e-ee1fda017554\") " Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.169032 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8t7rz\" (UniqueName: \"kubernetes.io/projected/63f7de77-6394-48d0-94f9-7b43fb7b715b-kube-api-access-8t7rz\") pod \"63f7de77-6394-48d0-94f9-7b43fb7b715b\" (UID: \"63f7de77-6394-48d0-94f9-7b43fb7b715b\") " Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.169058 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e788dce1-96bb-4768-950f-08abe5d34305-utilities\") pod 
\"e788dce1-96bb-4768-950f-08abe5d34305\" (UID: \"e788dce1-96bb-4768-950f-08abe5d34305\") " Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.169108 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63f7de77-6394-48d0-94f9-7b43fb7b715b-utilities\") pod \"63f7de77-6394-48d0-94f9-7b43fb7b715b\" (UID: \"63f7de77-6394-48d0-94f9-7b43fb7b715b\") " Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.169133 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbbc6b45-55e0-4f41-b54e-ee1fda017554-catalog-content\") pod \"bbbc6b45-55e0-4f41-b54e-ee1fda017554\" (UID: \"bbbc6b45-55e0-4f41-b54e-ee1fda017554\") " Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.169399 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fd8x6\" (UniqueName: \"kubernetes.io/projected/e984ad3d-befb-48a6-a5e3-597b3a8d4ff8-kube-api-access-fd8x6\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.169423 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e984ad3d-befb-48a6-a5e3-597b3a8d4ff8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.169436 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e984ad3d-befb-48a6-a5e3-597b3a8d4ff8-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.169765 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1422496-2b87-44ce-b710-32e8de483ebd-utilities" (OuterVolumeSpecName: "utilities") pod "a1422496-2b87-44ce-b710-32e8de483ebd" (UID: "a1422496-2b87-44ce-b710-32e8de483ebd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.170075 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbbc6b45-55e0-4f41-b54e-ee1fda017554-utilities" (OuterVolumeSpecName: "utilities") pod "bbbc6b45-55e0-4f41-b54e-ee1fda017554" (UID: "bbbc6b45-55e0-4f41-b54e-ee1fda017554"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.171301 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e788dce1-96bb-4768-950f-08abe5d34305-utilities" (OuterVolumeSpecName: "utilities") pod "e788dce1-96bb-4768-950f-08abe5d34305" (UID: "e788dce1-96bb-4768-950f-08abe5d34305"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.172697 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbbc6b45-55e0-4f41-b54e-ee1fda017554-kube-api-access-fxpbv" (OuterVolumeSpecName: "kube-api-access-fxpbv") pod "bbbc6b45-55e0-4f41-b54e-ee1fda017554" (UID: "bbbc6b45-55e0-4f41-b54e-ee1fda017554"). InnerVolumeSpecName "kube-api-access-fxpbv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.172720 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63f7de77-6394-48d0-94f9-7b43fb7b715b-utilities" (OuterVolumeSpecName: "utilities") pod "63f7de77-6394-48d0-94f9-7b43fb7b715b" (UID: "63f7de77-6394-48d0-94f9-7b43fb7b715b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.173239 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63f7de77-6394-48d0-94f9-7b43fb7b715b-kube-api-access-8t7rz" (OuterVolumeSpecName: "kube-api-access-8t7rz") pod "63f7de77-6394-48d0-94f9-7b43fb7b715b" (UID: "63f7de77-6394-48d0-94f9-7b43fb7b715b"). InnerVolumeSpecName "kube-api-access-8t7rz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.174368 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e788dce1-96bb-4768-950f-08abe5d34305-kube-api-access-6qh65" (OuterVolumeSpecName: "kube-api-access-6qh65") pod "e788dce1-96bb-4768-950f-08abe5d34305" (UID: "e788dce1-96bb-4768-950f-08abe5d34305"). InnerVolumeSpecName "kube-api-access-6qh65". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.175021 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1422496-2b87-44ce-b710-32e8de483ebd-kube-api-access-qd8rm" (OuterVolumeSpecName: "kube-api-access-qd8rm") pod "a1422496-2b87-44ce-b710-32e8de483ebd" (UID: "a1422496-2b87-44ce-b710-32e8de483ebd"). InnerVolumeSpecName "kube-api-access-qd8rm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.231158 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1422496-2b87-44ce-b710-32e8de483ebd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1422496-2b87-44ce-b710-32e8de483ebd" (UID: "a1422496-2b87-44ce-b710-32e8de483ebd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.242331 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e788dce1-96bb-4768-950f-08abe5d34305-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e788dce1-96bb-4768-950f-08abe5d34305" (UID: "e788dce1-96bb-4768-950f-08abe5d34305"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.270919 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qd8rm\" (UniqueName: \"kubernetes.io/projected/a1422496-2b87-44ce-b710-32e8de483ebd-kube-api-access-qd8rm\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.270961 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxpbv\" (UniqueName: \"kubernetes.io/projected/bbbc6b45-55e0-4f41-b54e-ee1fda017554-kube-api-access-fxpbv\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.270976 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e788dce1-96bb-4768-950f-08abe5d34305-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.270990 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qh65\" (UniqueName: \"kubernetes.io/projected/e788dce1-96bb-4768-950f-08abe5d34305-kube-api-access-6qh65\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.271000 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1422496-2b87-44ce-b710-32e8de483ebd-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.271012 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1422496-2b87-44ce-b710-32e8de483ebd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.271023 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbbc6b45-55e0-4f41-b54e-ee1fda017554-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.271033 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8t7rz\" (UniqueName: \"kubernetes.io/projected/63f7de77-6394-48d0-94f9-7b43fb7b715b-kube-api-access-8t7rz\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.271043 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e788dce1-96bb-4768-950f-08abe5d34305-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.271055 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63f7de77-6394-48d0-94f9-7b43fb7b715b-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.304094 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63f7de77-6394-48d0-94f9-7b43fb7b715b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63f7de77-6394-48d0-94f9-7b43fb7b715b" (UID: "63f7de77-6394-48d0-94f9-7b43fb7b715b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.306943 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbbc6b45-55e0-4f41-b54e-ee1fda017554-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bbbc6b45-55e0-4f41-b54e-ee1fda017554" (UID: "bbbc6b45-55e0-4f41-b54e-ee1fda017554"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.370825 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w7q5c" event={"ID":"a1422496-2b87-44ce-b710-32e8de483ebd","Type":"ContainerDied","Data":"341f9cf1e505248470c332898eccdea5540e5d241e961f8e9ca208f76a718db1"} Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.370879 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w7q5c" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.370883 4892 scope.go:117] "RemoveContainer" containerID="2e626dec01c67ad7b3d4a8498378feb7f5898fb6af1ec75e34d5ebb5416160d2" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.371841 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbbc6b45-55e0-4f41-b54e-ee1fda017554-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.371868 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63f7de77-6394-48d0-94f9-7b43fb7b715b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.373677 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dgbdj" event={"ID":"b3591a30-3306-421b-a004-90ed127b1ac1","Type":"ContainerDied","Data":"45087bb15ffe494efb75efec55a095c8c041b85314a3b9a338b6e7d9b22aa876"} Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.373856 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dgbdj" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.377898 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-62vzg" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.377948 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62vzg" event={"ID":"e984ad3d-befb-48a6-a5e3-597b3a8d4ff8","Type":"ContainerDied","Data":"4aea12c815c9a0438bbc3e3ce365d23bf46068bd05e1336c52c33db9b5ac0efd"} Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.381368 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dd7gh" event={"ID":"63f7de77-6394-48d0-94f9-7b43fb7b715b","Type":"ContainerDied","Data":"0f98dcf222f60babd58a7443929db6f2c089ae153d41dfdf52ec383cd095a5ec"} Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.381405 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dd7gh" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.383100 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99vdf" event={"ID":"0b80f630-aab1-47ce-87ba-f8f753f5664a","Type":"ContainerDied","Data":"23b7ef1e7d19d338963b9f05b80d5fffb2692913e455c5c403d32fc289b57202"} Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.383216 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-99vdf" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.392124 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5djk8" event={"ID":"bbbc6b45-55e0-4f41-b54e-ee1fda017554","Type":"ContainerDied","Data":"903c40188b9cbf272f8f9d7eccb475301b69157f62503cd0dffa52154036ce86"} Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.392251 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5djk8" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.396835 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k75r9" event={"ID":"e788dce1-96bb-4768-950f-08abe5d34305","Type":"ContainerDied","Data":"91f07f6f38ec6ae40131e1f760032e35296fa861f1907cc815c08d6ac32fba8b"} Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.396874 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k75r9" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.398997 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w7q5c"] Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.404464 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w7q5c"] Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.405203 4892 scope.go:117] "RemoveContainer" containerID="9b95e7e47d9bcc517c1f95f73e660eeac606725407b669f6e8c7ae74785ee746" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.441124 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1422496-2b87-44ce-b710-32e8de483ebd" path="/var/lib/kubelet/pods/a1422496-2b87-44ce-b710-32e8de483ebd/volumes" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.442252 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa7d6587-5137-4b9b-accb-3b4800c1bce6" path="/var/lib/kubelet/pods/fa7d6587-5137-4b9b-accb-3b4800c1bce6/volumes" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.443239 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-62vzg"] Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.443268 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-62vzg"] Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.443334 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dgbdj"] Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.443349 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dgbdj"] Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.445185 4892 scope.go:117] "RemoveContainer" containerID="ddb646b53b6ab0142db5173cdddaa817fe8d139e595aa7dbc7cdd244ac1ef93f" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.450952 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-99vdf"] Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.457402 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-99vdf"] Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.464700 4892 scope.go:117] "RemoveContainer" containerID="d49938e7adcdd5459c5266076a26b7fbfeefca96bdeb9f12473c3aa36d4c93d4" Jan 22 09:14:29 crc 
kubenswrapper[4892]: I0122 09:14:29.469269 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5djk8"] Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.474470 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5djk8"] Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.478720 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k75r9"] Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.482042 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k75r9"] Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.486407 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dd7gh"] Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.489693 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dd7gh"] Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.490617 4892 scope.go:117] "RemoveContainer" containerID="6b10fbe3dd1a95ea1459a8b9bbfedcb9383ad89b7e80589d3910ad59f19a31fc" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.503815 4892 scope.go:117] "RemoveContainer" containerID="d4931b0d49bb33b5a09fdf2d7ac88996ee9038f39883e192a8043ceaf5bee4ab" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.523222 4892 scope.go:117] "RemoveContainer" containerID="90d7e1995451ed3d956c93d22c15efebaca7b2a73bcfc119cf69a9ab4cf1eb0e" Jan 22 09:14:29 crc kubenswrapper[4892]: I0122 09:14:29.535659 4892 scope.go:117] "RemoveContainer" containerID="0dc6b216667b83ba9589e8e09bef82bd9864e552a10a204d0e85f81d1191e5b8" Jan 22 09:14:30 crc kubenswrapper[4892]: I0122 09:14:30.879122 4892 scope.go:117] "RemoveContainer" containerID="f7bd6c14f1560fb0ea3ee393e5d6d6289efa8b889182d6fb6d54d03e1ee6d97c" Jan 22 09:14:30 crc kubenswrapper[4892]: I0122 09:14:30.903718 4892 scope.go:117] "RemoveContainer" containerID="bb3b59d466cbc7caa383ed5cbbec77fc044a940dec14abe8fe2fbbddfb3dbd8b" Jan 22 09:14:30 crc kubenswrapper[4892]: I0122 09:14:30.915904 4892 scope.go:117] "RemoveContainer" containerID="f0cd7bbf432fb973b5dad780013d37d8f189138803a9f5a9d19d986d900607ec" Jan 22 09:14:30 crc kubenswrapper[4892]: I0122 09:14:30.932080 4892 scope.go:117] "RemoveContainer" containerID="ee9be3424a622800fef31fe6f7794ced466a0de082e7cffa5e8a05bb4681cb0e" Jan 22 09:14:30 crc kubenswrapper[4892]: I0122 09:14:30.950194 4892 scope.go:117] "RemoveContainer" containerID="0bb51b9ed69188e2c3307aab0be7ecd217ae6a53e00e7e7440462f9f2cb495b2" Jan 22 09:14:30 crc kubenswrapper[4892]: I0122 09:14:30.967487 4892 scope.go:117] "RemoveContainer" containerID="7c46f38bd4a4b35ba2639257f76c77d956ede2053108650167284e48b39704ab" Jan 22 09:14:30 crc kubenswrapper[4892]: I0122 09:14:30.981842 4892 scope.go:117] "RemoveContainer" containerID="e94eb76a3000d5a07046d0e6ca053fb8fdf4b49a0babc704cbd01c1af67832d3" Jan 22 09:14:30 crc kubenswrapper[4892]: I0122 09:14:30.996148 4892 scope.go:117] "RemoveContainer" containerID="7fb8e7af446633854e7f7753dedbc7744eb80d8d0b7187e33b40037f179e0aaa" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.011073 4892 scope.go:117] "RemoveContainer" containerID="cbb2680476d4ba316df68ebf127017db5ed5315535d7cc430052e9b755129996" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.022762 4892 scope.go:117] "RemoveContainer" containerID="06a200eea161e5863e695f6b32538db87fadb8b5a5c85067081c46c1f5fa2654" Jan 22 09:14:31 crc kubenswrapper[4892]: 
I0122 09:14:31.034186 4892 scope.go:117] "RemoveContainer" containerID="197c933d1853e0c3f15ede074464ba833e6329f9620663f3e7ee58398a1bd7d1" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.045200 4892 scope.go:117] "RemoveContainer" containerID="8623fbb5f9171b91ce90391cc0fbd576511fe05e17da39efdc4fcddc1e2fb28b" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.058441 4892 scope.go:117] "RemoveContainer" containerID="9041398a9898d05f4f3f000754d3f59aa7cf83cb89bc247de8f3aa1dadcabd5f" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.128097 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7pjdw"] Jan 22 09:14:31 crc kubenswrapper[4892]: E0122 09:14:31.128338 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e984ad3d-befb-48a6-a5e3-597b3a8d4ff8" containerName="registry-server" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.128353 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="e984ad3d-befb-48a6-a5e3-597b3a8d4ff8" containerName="registry-server" Jan 22 09:14:31 crc kubenswrapper[4892]: E0122 09:14:31.128367 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa7d6587-5137-4b9b-accb-3b4800c1bce6" containerName="marketplace-operator" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.128377 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa7d6587-5137-4b9b-accb-3b4800c1bce6" containerName="marketplace-operator" Jan 22 09:14:31 crc kubenswrapper[4892]: E0122 09:14:31.128387 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1422496-2b87-44ce-b710-32e8de483ebd" containerName="registry-server" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.128396 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1422496-2b87-44ce-b710-32e8de483ebd" containerName="registry-server" Jan 22 09:14:31 crc kubenswrapper[4892]: E0122 09:14:31.128408 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbbc6b45-55e0-4f41-b54e-ee1fda017554" containerName="extract-utilities" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.128416 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbbc6b45-55e0-4f41-b54e-ee1fda017554" containerName="extract-utilities" Jan 22 09:14:31 crc kubenswrapper[4892]: E0122 09:14:31.128428 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1422496-2b87-44ce-b710-32e8de483ebd" containerName="extract-content" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.128436 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1422496-2b87-44ce-b710-32e8de483ebd" containerName="extract-content" Jan 22 09:14:31 crc kubenswrapper[4892]: E0122 09:14:31.128448 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1422496-2b87-44ce-b710-32e8de483ebd" containerName="extract-utilities" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.128456 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1422496-2b87-44ce-b710-32e8de483ebd" containerName="extract-utilities" Jan 22 09:14:31 crc kubenswrapper[4892]: E0122 09:14:31.128483 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e788dce1-96bb-4768-950f-08abe5d34305" containerName="registry-server" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.128492 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="e788dce1-96bb-4768-950f-08abe5d34305" containerName="registry-server" Jan 22 09:14:31 crc kubenswrapper[4892]: E0122 09:14:31.128508 4892 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="e788dce1-96bb-4768-950f-08abe5d34305" containerName="extract-utilities" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.128517 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="e788dce1-96bb-4768-950f-08abe5d34305" containerName="extract-utilities" Jan 22 09:14:31 crc kubenswrapper[4892]: E0122 09:14:31.128531 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3591a30-3306-421b-a004-90ed127b1ac1" containerName="extract-content" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.128539 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3591a30-3306-421b-a004-90ed127b1ac1" containerName="extract-content" Jan 22 09:14:31 crc kubenswrapper[4892]: E0122 09:14:31.128548 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e984ad3d-befb-48a6-a5e3-597b3a8d4ff8" containerName="extract-utilities" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.128556 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="e984ad3d-befb-48a6-a5e3-597b3a8d4ff8" containerName="extract-utilities" Jan 22 09:14:31 crc kubenswrapper[4892]: E0122 09:14:31.128567 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63f7de77-6394-48d0-94f9-7b43fb7b715b" containerName="registry-server" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.128575 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="63f7de77-6394-48d0-94f9-7b43fb7b715b" containerName="registry-server" Jan 22 09:14:31 crc kubenswrapper[4892]: E0122 09:14:31.128588 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63f7de77-6394-48d0-94f9-7b43fb7b715b" containerName="extract-content" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.128596 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="63f7de77-6394-48d0-94f9-7b43fb7b715b" containerName="extract-content" Jan 22 09:14:31 crc kubenswrapper[4892]: E0122 09:14:31.128606 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b80f630-aab1-47ce-87ba-f8f753f5664a" containerName="extract-content" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.128614 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b80f630-aab1-47ce-87ba-f8f753f5664a" containerName="extract-content" Jan 22 09:14:31 crc kubenswrapper[4892]: E0122 09:14:31.128623 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e984ad3d-befb-48a6-a5e3-597b3a8d4ff8" containerName="extract-content" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.128631 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="e984ad3d-befb-48a6-a5e3-597b3a8d4ff8" containerName="extract-content" Jan 22 09:14:31 crc kubenswrapper[4892]: E0122 09:14:31.128641 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbbc6b45-55e0-4f41-b54e-ee1fda017554" containerName="extract-content" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.128649 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbbc6b45-55e0-4f41-b54e-ee1fda017554" containerName="extract-content" Jan 22 09:14:31 crc kubenswrapper[4892]: E0122 09:14:31.128658 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbbc6b45-55e0-4f41-b54e-ee1fda017554" containerName="registry-server" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.128669 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbbc6b45-55e0-4f41-b54e-ee1fda017554" containerName="registry-server" Jan 22 09:14:31 crc kubenswrapper[4892]: E0122 
09:14:31.128685 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63f7de77-6394-48d0-94f9-7b43fb7b715b" containerName="extract-utilities" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.128696 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="63f7de77-6394-48d0-94f9-7b43fb7b715b" containerName="extract-utilities" Jan 22 09:14:31 crc kubenswrapper[4892]: E0122 09:14:31.128705 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b80f630-aab1-47ce-87ba-f8f753f5664a" containerName="extract-utilities" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.128714 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b80f630-aab1-47ce-87ba-f8f753f5664a" containerName="extract-utilities" Jan 22 09:14:31 crc kubenswrapper[4892]: E0122 09:14:31.128726 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3591a30-3306-421b-a004-90ed127b1ac1" containerName="extract-utilities" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.128734 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3591a30-3306-421b-a004-90ed127b1ac1" containerName="extract-utilities" Jan 22 09:14:31 crc kubenswrapper[4892]: E0122 09:14:31.128747 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3591a30-3306-421b-a004-90ed127b1ac1" containerName="registry-server" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.128756 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3591a30-3306-421b-a004-90ed127b1ac1" containerName="registry-server" Jan 22 09:14:31 crc kubenswrapper[4892]: E0122 09:14:31.128765 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b80f630-aab1-47ce-87ba-f8f753f5664a" containerName="registry-server" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.128773 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b80f630-aab1-47ce-87ba-f8f753f5664a" containerName="registry-server" Jan 22 09:14:31 crc kubenswrapper[4892]: E0122 09:14:31.128785 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e788dce1-96bb-4768-950f-08abe5d34305" containerName="extract-content" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.128793 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="e788dce1-96bb-4768-950f-08abe5d34305" containerName="extract-content" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.128922 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3591a30-3306-421b-a004-90ed127b1ac1" containerName="registry-server" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.128936 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1422496-2b87-44ce-b710-32e8de483ebd" containerName="registry-server" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.128946 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="63f7de77-6394-48d0-94f9-7b43fb7b715b" containerName="registry-server" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.128959 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbbc6b45-55e0-4f41-b54e-ee1fda017554" containerName="registry-server" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.128972 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="e788dce1-96bb-4768-950f-08abe5d34305" containerName="registry-server" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.128981 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b80f630-aab1-47ce-87ba-f8f753f5664a" 
containerName="registry-server" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.128992 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa7d6587-5137-4b9b-accb-3b4800c1bce6" containerName="marketplace-operator" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.129005 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="e984ad3d-befb-48a6-a5e3-597b3a8d4ff8" containerName="registry-server" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.129975 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7pjdw" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.131995 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.138273 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7pjdw"] Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.196680 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6nlx\" (UniqueName: \"kubernetes.io/projected/1c53bdc3-44ab-4be2-9f83-2d241776a337-kube-api-access-b6nlx\") pod \"redhat-marketplace-7pjdw\" (UID: \"1c53bdc3-44ab-4be2-9f83-2d241776a337\") " pod="openshift-marketplace/redhat-marketplace-7pjdw" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.196722 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c53bdc3-44ab-4be2-9f83-2d241776a337-catalog-content\") pod \"redhat-marketplace-7pjdw\" (UID: \"1c53bdc3-44ab-4be2-9f83-2d241776a337\") " pod="openshift-marketplace/redhat-marketplace-7pjdw" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.196756 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c53bdc3-44ab-4be2-9f83-2d241776a337-utilities\") pod \"redhat-marketplace-7pjdw\" (UID: \"1c53bdc3-44ab-4be2-9f83-2d241776a337\") " pod="openshift-marketplace/redhat-marketplace-7pjdw" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.298027 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6nlx\" (UniqueName: \"kubernetes.io/projected/1c53bdc3-44ab-4be2-9f83-2d241776a337-kube-api-access-b6nlx\") pod \"redhat-marketplace-7pjdw\" (UID: \"1c53bdc3-44ab-4be2-9f83-2d241776a337\") " pod="openshift-marketplace/redhat-marketplace-7pjdw" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.298076 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c53bdc3-44ab-4be2-9f83-2d241776a337-catalog-content\") pod \"redhat-marketplace-7pjdw\" (UID: \"1c53bdc3-44ab-4be2-9f83-2d241776a337\") " pod="openshift-marketplace/redhat-marketplace-7pjdw" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.298113 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c53bdc3-44ab-4be2-9f83-2d241776a337-utilities\") pod \"redhat-marketplace-7pjdw\" (UID: \"1c53bdc3-44ab-4be2-9f83-2d241776a337\") " pod="openshift-marketplace/redhat-marketplace-7pjdw" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.298707 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c53bdc3-44ab-4be2-9f83-2d241776a337-utilities\") pod \"redhat-marketplace-7pjdw\" (UID: \"1c53bdc3-44ab-4be2-9f83-2d241776a337\") " pod="openshift-marketplace/redhat-marketplace-7pjdw" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.298987 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c53bdc3-44ab-4be2-9f83-2d241776a337-catalog-content\") pod \"redhat-marketplace-7pjdw\" (UID: \"1c53bdc3-44ab-4be2-9f83-2d241776a337\") " pod="openshift-marketplace/redhat-marketplace-7pjdw" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.317839 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6nlx\" (UniqueName: \"kubernetes.io/projected/1c53bdc3-44ab-4be2-9f83-2d241776a337-kube-api-access-b6nlx\") pod \"redhat-marketplace-7pjdw\" (UID: \"1c53bdc3-44ab-4be2-9f83-2d241776a337\") " pod="openshift-marketplace/redhat-marketplace-7pjdw" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.325028 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p8m4r"] Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.326190 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p8m4r" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.334961 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.338702 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p8m4r"] Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.398829 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htd9h\" (UniqueName: \"kubernetes.io/projected/5524172a-41d9-4206-b133-ff86aa15f588-kube-api-access-htd9h\") pod \"certified-operators-p8m4r\" (UID: \"5524172a-41d9-4206-b133-ff86aa15f588\") " pod="openshift-marketplace/certified-operators-p8m4r" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.398867 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5524172a-41d9-4206-b133-ff86aa15f588-utilities\") pod \"certified-operators-p8m4r\" (UID: \"5524172a-41d9-4206-b133-ff86aa15f588\") " pod="openshift-marketplace/certified-operators-p8m4r" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.398943 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5524172a-41d9-4206-b133-ff86aa15f588-catalog-content\") pod \"certified-operators-p8m4r\" (UID: \"5524172a-41d9-4206-b133-ff86aa15f588\") " pod="openshift-marketplace/certified-operators-p8m4r" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.425401 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b80f630-aab1-47ce-87ba-f8f753f5664a" path="/var/lib/kubelet/pods/0b80f630-aab1-47ce-87ba-f8f753f5664a/volumes" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.426232 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63f7de77-6394-48d0-94f9-7b43fb7b715b" path="/var/lib/kubelet/pods/63f7de77-6394-48d0-94f9-7b43fb7b715b/volumes" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 
09:14:31.427010 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3591a30-3306-421b-a004-90ed127b1ac1" path="/var/lib/kubelet/pods/b3591a30-3306-421b-a004-90ed127b1ac1/volumes" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.428297 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbbc6b45-55e0-4f41-b54e-ee1fda017554" path="/var/lib/kubelet/pods/bbbc6b45-55e0-4f41-b54e-ee1fda017554/volumes" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.428937 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e788dce1-96bb-4768-950f-08abe5d34305" path="/var/lib/kubelet/pods/e788dce1-96bb-4768-950f-08abe5d34305/volumes" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.429978 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e984ad3d-befb-48a6-a5e3-597b3a8d4ff8" path="/var/lib/kubelet/pods/e984ad3d-befb-48a6-a5e3-597b3a8d4ff8/volumes" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.456342 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.464834 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7pjdw" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.500568 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htd9h\" (UniqueName: \"kubernetes.io/projected/5524172a-41d9-4206-b133-ff86aa15f588-kube-api-access-htd9h\") pod \"certified-operators-p8m4r\" (UID: \"5524172a-41d9-4206-b133-ff86aa15f588\") " pod="openshift-marketplace/certified-operators-p8m4r" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.500631 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5524172a-41d9-4206-b133-ff86aa15f588-utilities\") pod \"certified-operators-p8m4r\" (UID: \"5524172a-41d9-4206-b133-ff86aa15f588\") " pod="openshift-marketplace/certified-operators-p8m4r" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.500705 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5524172a-41d9-4206-b133-ff86aa15f588-catalog-content\") pod \"certified-operators-p8m4r\" (UID: \"5524172a-41d9-4206-b133-ff86aa15f588\") " pod="openshift-marketplace/certified-operators-p8m4r" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.501107 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5524172a-41d9-4206-b133-ff86aa15f588-catalog-content\") pod \"certified-operators-p8m4r\" (UID: \"5524172a-41d9-4206-b133-ff86aa15f588\") " pod="openshift-marketplace/certified-operators-p8m4r" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.501948 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5524172a-41d9-4206-b133-ff86aa15f588-utilities\") pod \"certified-operators-p8m4r\" (UID: \"5524172a-41d9-4206-b133-ff86aa15f588\") " pod="openshift-marketplace/certified-operators-p8m4r" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.519824 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htd9h\" (UniqueName: 
\"kubernetes.io/projected/5524172a-41d9-4206-b133-ff86aa15f588-kube-api-access-htd9h\") pod \"certified-operators-p8m4r\" (UID: \"5524172a-41d9-4206-b133-ff86aa15f588\") " pod="openshift-marketplace/certified-operators-p8m4r" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.635778 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7pjdw"] Jan 22 09:14:31 crc kubenswrapper[4892]: W0122 09:14:31.644995 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c53bdc3_44ab_4be2_9f83_2d241776a337.slice/crio-4cca1680e944d23231ac15c116c7c9cfd26206cd81c16d213a12517863cc059c WatchSource:0}: Error finding container 4cca1680e944d23231ac15c116c7c9cfd26206cd81c16d213a12517863cc059c: Status 404 returned error can't find the container with id 4cca1680e944d23231ac15c116c7c9cfd26206cd81c16d213a12517863cc059c Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.648427 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 22 09:14:31 crc kubenswrapper[4892]: I0122 09:14:31.657626 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p8m4r" Jan 22 09:14:32 crc kubenswrapper[4892]: I0122 09:14:32.065746 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p8m4r"] Jan 22 09:14:32 crc kubenswrapper[4892]: I0122 09:14:32.435184 4892 generic.go:334] "Generic (PLEG): container finished" podID="5524172a-41d9-4206-b133-ff86aa15f588" containerID="ef9af9cea7e293fc025ec315f7617b248253b5991d07e41da41e89fb046516ba" exitCode=0 Jan 22 09:14:32 crc kubenswrapper[4892]: I0122 09:14:32.435249 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p8m4r" event={"ID":"5524172a-41d9-4206-b133-ff86aa15f588","Type":"ContainerDied","Data":"ef9af9cea7e293fc025ec315f7617b248253b5991d07e41da41e89fb046516ba"} Jan 22 09:14:32 crc kubenswrapper[4892]: I0122 09:14:32.435273 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p8m4r" event={"ID":"5524172a-41d9-4206-b133-ff86aa15f588","Type":"ContainerStarted","Data":"5553c64301f308bdcf58cb97ec09050a521f7793571ddd4366ad545d6776cf18"} Jan 22 09:14:32 crc kubenswrapper[4892]: I0122 09:14:32.437117 4892 generic.go:334] "Generic (PLEG): container finished" podID="1c53bdc3-44ab-4be2-9f83-2d241776a337" containerID="21b0ac3e19f2d3359686e7537bb5f213fae2cfbb849e8bd5e292e7c26c2b689d" exitCode=0 Jan 22 09:14:32 crc kubenswrapper[4892]: I0122 09:14:32.437141 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7pjdw" event={"ID":"1c53bdc3-44ab-4be2-9f83-2d241776a337","Type":"ContainerDied","Data":"21b0ac3e19f2d3359686e7537bb5f213fae2cfbb849e8bd5e292e7c26c2b689d"} Jan 22 09:14:32 crc kubenswrapper[4892]: I0122 09:14:32.438034 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7pjdw" event={"ID":"1c53bdc3-44ab-4be2-9f83-2d241776a337","Type":"ContainerStarted","Data":"4cca1680e944d23231ac15c116c7c9cfd26206cd81c16d213a12517863cc059c"} Jan 22 09:14:33 crc kubenswrapper[4892]: I0122 09:14:33.442908 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7pjdw" 
event={"ID":"1c53bdc3-44ab-4be2-9f83-2d241776a337","Type":"ContainerStarted","Data":"c419b3d6e4a414254675f27d55d1e052d8f499207742d148b05035139398f8ff"} Jan 22 09:14:33 crc kubenswrapper[4892]: I0122 09:14:33.523577 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pf4pl"] Jan 22 09:14:33 crc kubenswrapper[4892]: I0122 09:14:33.524599 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pf4pl" Jan 22 09:14:33 crc kubenswrapper[4892]: I0122 09:14:33.527101 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 22 09:14:33 crc kubenswrapper[4892]: I0122 09:14:33.531202 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pf4pl"] Jan 22 09:14:33 crc kubenswrapper[4892]: I0122 09:14:33.625865 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91b13dd5-7aad-496a-8138-9a9e638a0a01-catalog-content\") pod \"redhat-operators-pf4pl\" (UID: \"91b13dd5-7aad-496a-8138-9a9e638a0a01\") " pod="openshift-marketplace/redhat-operators-pf4pl" Jan 22 09:14:33 crc kubenswrapper[4892]: I0122 09:14:33.625932 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91b13dd5-7aad-496a-8138-9a9e638a0a01-utilities\") pod \"redhat-operators-pf4pl\" (UID: \"91b13dd5-7aad-496a-8138-9a9e638a0a01\") " pod="openshift-marketplace/redhat-operators-pf4pl" Jan 22 09:14:33 crc kubenswrapper[4892]: I0122 09:14:33.625966 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmxqz\" (UniqueName: \"kubernetes.io/projected/91b13dd5-7aad-496a-8138-9a9e638a0a01-kube-api-access-wmxqz\") pod \"redhat-operators-pf4pl\" (UID: \"91b13dd5-7aad-496a-8138-9a9e638a0a01\") " pod="openshift-marketplace/redhat-operators-pf4pl" Jan 22 09:14:33 crc kubenswrapper[4892]: I0122 09:14:33.727056 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vrbh2"] Jan 22 09:14:33 crc kubenswrapper[4892]: I0122 09:14:33.727078 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91b13dd5-7aad-496a-8138-9a9e638a0a01-utilities\") pod \"redhat-operators-pf4pl\" (UID: \"91b13dd5-7aad-496a-8138-9a9e638a0a01\") " pod="openshift-marketplace/redhat-operators-pf4pl" Jan 22 09:14:33 crc kubenswrapper[4892]: I0122 09:14:33.727146 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmxqz\" (UniqueName: \"kubernetes.io/projected/91b13dd5-7aad-496a-8138-9a9e638a0a01-kube-api-access-wmxqz\") pod \"redhat-operators-pf4pl\" (UID: \"91b13dd5-7aad-496a-8138-9a9e638a0a01\") " pod="openshift-marketplace/redhat-operators-pf4pl" Jan 22 09:14:33 crc kubenswrapper[4892]: I0122 09:14:33.727184 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91b13dd5-7aad-496a-8138-9a9e638a0a01-catalog-content\") pod \"redhat-operators-pf4pl\" (UID: \"91b13dd5-7aad-496a-8138-9a9e638a0a01\") " pod="openshift-marketplace/redhat-operators-pf4pl" Jan 22 09:14:33 crc kubenswrapper[4892]: I0122 09:14:33.727660 4892 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91b13dd5-7aad-496a-8138-9a9e638a0a01-catalog-content\") pod \"redhat-operators-pf4pl\" (UID: \"91b13dd5-7aad-496a-8138-9a9e638a0a01\") " pod="openshift-marketplace/redhat-operators-pf4pl" Jan 22 09:14:33 crc kubenswrapper[4892]: I0122 09:14:33.727854 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91b13dd5-7aad-496a-8138-9a9e638a0a01-utilities\") pod \"redhat-operators-pf4pl\" (UID: \"91b13dd5-7aad-496a-8138-9a9e638a0a01\") " pod="openshift-marketplace/redhat-operators-pf4pl" Jan 22 09:14:33 crc kubenswrapper[4892]: I0122 09:14:33.729716 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vrbh2" Jan 22 09:14:33 crc kubenswrapper[4892]: I0122 09:14:33.731326 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 22 09:14:33 crc kubenswrapper[4892]: I0122 09:14:33.739887 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vrbh2"] Jan 22 09:14:33 crc kubenswrapper[4892]: I0122 09:14:33.747857 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmxqz\" (UniqueName: \"kubernetes.io/projected/91b13dd5-7aad-496a-8138-9a9e638a0a01-kube-api-access-wmxqz\") pod \"redhat-operators-pf4pl\" (UID: \"91b13dd5-7aad-496a-8138-9a9e638a0a01\") " pod="openshift-marketplace/redhat-operators-pf4pl" Jan 22 09:14:33 crc kubenswrapper[4892]: I0122 09:14:33.828332 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49114a09-ac3a-4dbd-99f1-26543fbf5dcf-utilities\") pod \"community-operators-vrbh2\" (UID: \"49114a09-ac3a-4dbd-99f1-26543fbf5dcf\") " pod="openshift-marketplace/community-operators-vrbh2" Jan 22 09:14:33 crc kubenswrapper[4892]: I0122 09:14:33.828405 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49114a09-ac3a-4dbd-99f1-26543fbf5dcf-catalog-content\") pod \"community-operators-vrbh2\" (UID: \"49114a09-ac3a-4dbd-99f1-26543fbf5dcf\") " pod="openshift-marketplace/community-operators-vrbh2" Jan 22 09:14:33 crc kubenswrapper[4892]: I0122 09:14:33.828429 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hm2f\" (UniqueName: \"kubernetes.io/projected/49114a09-ac3a-4dbd-99f1-26543fbf5dcf-kube-api-access-7hm2f\") pod \"community-operators-vrbh2\" (UID: \"49114a09-ac3a-4dbd-99f1-26543fbf5dcf\") " pod="openshift-marketplace/community-operators-vrbh2" Jan 22 09:14:33 crc kubenswrapper[4892]: I0122 09:14:33.840152 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pf4pl" Jan 22 09:14:33 crc kubenswrapper[4892]: I0122 09:14:33.929412 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49114a09-ac3a-4dbd-99f1-26543fbf5dcf-catalog-content\") pod \"community-operators-vrbh2\" (UID: \"49114a09-ac3a-4dbd-99f1-26543fbf5dcf\") " pod="openshift-marketplace/community-operators-vrbh2" Jan 22 09:14:33 crc kubenswrapper[4892]: I0122 09:14:33.929783 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hm2f\" (UniqueName: \"kubernetes.io/projected/49114a09-ac3a-4dbd-99f1-26543fbf5dcf-kube-api-access-7hm2f\") pod \"community-operators-vrbh2\" (UID: \"49114a09-ac3a-4dbd-99f1-26543fbf5dcf\") " pod="openshift-marketplace/community-operators-vrbh2" Jan 22 09:14:33 crc kubenswrapper[4892]: I0122 09:14:33.929876 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49114a09-ac3a-4dbd-99f1-26543fbf5dcf-utilities\") pod \"community-operators-vrbh2\" (UID: \"49114a09-ac3a-4dbd-99f1-26543fbf5dcf\") " pod="openshift-marketplace/community-operators-vrbh2" Jan 22 09:14:33 crc kubenswrapper[4892]: I0122 09:14:33.930181 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49114a09-ac3a-4dbd-99f1-26543fbf5dcf-catalog-content\") pod \"community-operators-vrbh2\" (UID: \"49114a09-ac3a-4dbd-99f1-26543fbf5dcf\") " pod="openshift-marketplace/community-operators-vrbh2" Jan 22 09:14:33 crc kubenswrapper[4892]: I0122 09:14:33.932382 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49114a09-ac3a-4dbd-99f1-26543fbf5dcf-utilities\") pod \"community-operators-vrbh2\" (UID: \"49114a09-ac3a-4dbd-99f1-26543fbf5dcf\") " pod="openshift-marketplace/community-operators-vrbh2" Jan 22 09:14:33 crc kubenswrapper[4892]: I0122 09:14:33.945633 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hm2f\" (UniqueName: \"kubernetes.io/projected/49114a09-ac3a-4dbd-99f1-26543fbf5dcf-kube-api-access-7hm2f\") pod \"community-operators-vrbh2\" (UID: \"49114a09-ac3a-4dbd-99f1-26543fbf5dcf\") " pod="openshift-marketplace/community-operators-vrbh2" Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.099649 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vrbh2" Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.250543 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pf4pl"] Jan 22 09:14:34 crc kubenswrapper[4892]: W0122 09:14:34.264505 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91b13dd5_7aad_496a_8138_9a9e638a0a01.slice/crio-7b12915790271ea005dc926135435040192aff50441238409f74333ab68c31ae WatchSource:0}: Error finding container 7b12915790271ea005dc926135435040192aff50441238409f74333ab68c31ae: Status 404 returned error can't find the container with id 7b12915790271ea005dc926135435040192aff50441238409f74333ab68c31ae Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.450531 4892 generic.go:334] "Generic (PLEG): container finished" podID="5524172a-41d9-4206-b133-ff86aa15f588" containerID="af209143576344fbc6773417ff97ce84950cbf131c34866eda8baff74e4e921e" exitCode=0 Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.450759 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p8m4r" event={"ID":"5524172a-41d9-4206-b133-ff86aa15f588","Type":"ContainerDied","Data":"af209143576344fbc6773417ff97ce84950cbf131c34866eda8baff74e4e921e"} Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.453090 4892 generic.go:334] "Generic (PLEG): container finished" podID="91b13dd5-7aad-496a-8138-9a9e638a0a01" containerID="32dfc7cd33c1058368cff0c3a661707717ee061fe8807a936433bbeaa4876c6e" exitCode=0 Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.453312 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pf4pl" event={"ID":"91b13dd5-7aad-496a-8138-9a9e638a0a01","Type":"ContainerDied","Data":"32dfc7cd33c1058368cff0c3a661707717ee061fe8807a936433bbeaa4876c6e"} Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.453354 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pf4pl" event={"ID":"91b13dd5-7aad-496a-8138-9a9e638a0a01","Type":"ContainerStarted","Data":"7b12915790271ea005dc926135435040192aff50441238409f74333ab68c31ae"} Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.456306 4892 generic.go:334] "Generic (PLEG): container finished" podID="1c53bdc3-44ab-4be2-9f83-2d241776a337" containerID="c419b3d6e4a414254675f27d55d1e052d8f499207742d148b05035139398f8ff" exitCode=0 Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.456340 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7pjdw" event={"ID":"1c53bdc3-44ab-4be2-9f83-2d241776a337","Type":"ContainerDied","Data":"c419b3d6e4a414254675f27d55d1e052d8f499207742d148b05035139398f8ff"} Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.479876 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vrbh2"] Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.870413 4892 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.871397 4892 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.871578 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.871667 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086" gracePeriod=15 Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.871789 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9" gracePeriod=15 Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.871878 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c" gracePeriod=15 Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.871771 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3" gracePeriod=15 Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.871808 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105" gracePeriod=15 Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.875196 4892 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 09:14:34 crc kubenswrapper[4892]: E0122 09:14:34.875406 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.875428 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 09:14:34 crc kubenswrapper[4892]: E0122 09:14:34.875440 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.875447 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 22 09:14:34 crc kubenswrapper[4892]: E0122 09:14:34.875457 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.875463 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 22 09:14:34 crc kubenswrapper[4892]: E0122 09:14:34.875474 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-syncer" Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.875480 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 22 09:14:34 crc kubenswrapper[4892]: E0122 09:14:34.875487 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.875493 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 09:14:34 crc kubenswrapper[4892]: E0122 09:14:34.875505 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.875510 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 09:14:34 crc kubenswrapper[4892]: E0122 09:14:34.875523 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.875528 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.875626 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.875641 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.875648 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.875706 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.875717 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.876245 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.914106 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.942735 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.942809 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.942844 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.942864 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.942915 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.942941 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.943081 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:14:34 crc kubenswrapper[4892]: I0122 09:14:34.943184 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.044734 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.044785 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.044834 4892 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.044864 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.044881 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.044906 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.044928 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.044943 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.045001 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.045037 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.045060 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.045080 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.045100 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.045119 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.045139 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.045157 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.214447 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:14:35 crc kubenswrapper[4892]: W0122 09:14:35.233003 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-d0738c7cc47da3d16e1be261f2b4cf7f1741f25633f633ca150f4bc2a7909629 WatchSource:0}: Error finding container d0738c7cc47da3d16e1be261f2b4cf7f1741f25633f633ca150f4bc2a7909629: Status 404 returned error can't find the container with id d0738c7cc47da3d16e1be261f2b4cf7f1741f25633f633ca150f4bc2a7909629 Jan 22 09:14:35 crc kubenswrapper[4892]: E0122 09:14:35.240057 4892 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.236:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d02c558027370 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:14:35.234841456 +0000 UTC m=+245.078920519,LastTimestamp:2026-01-22 09:14:35.234841456 +0000 UTC m=+245.078920519,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.466168 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7pjdw" event={"ID":"1c53bdc3-44ab-4be2-9f83-2d241776a337","Type":"ContainerStarted","Data":"bccb807ec093e840820ece9c9607fb0fa68a1121fb597c5b6bd554fe01595a6e"} Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.467338 4892 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.467685 4892 status_manager.go:851] "Failed to get status for pod" podUID="1c53bdc3-44ab-4be2-9f83-2d241776a337" pod="openshift-marketplace/redhat-marketplace-7pjdw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pjdw\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.471203 4892 generic.go:334] "Generic (PLEG): container finished" podID="44ffdd66-4baa-46c3-9441-4f46b6c0c835" containerID="9ad58a3ce8e27d6c33100b310b6534fd549158b157dc78ca359ef0bfd99fac3f" exitCode=0 Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.471319 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"44ffdd66-4baa-46c3-9441-4f46b6c0c835","Type":"ContainerDied","Data":"9ad58a3ce8e27d6c33100b310b6534fd549158b157dc78ca359ef0bfd99fac3f"} Jan 22 
09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.472247 4892 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.472649 4892 status_manager.go:851] "Failed to get status for pod" podUID="44ffdd66-4baa-46c3-9441-4f46b6c0c835" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.472784 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"d0738c7cc47da3d16e1be261f2b4cf7f1741f25633f633ca150f4bc2a7909629"} Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.472821 4892 status_manager.go:851] "Failed to get status for pod" podUID="1c53bdc3-44ab-4be2-9f83-2d241776a337" pod="openshift-marketplace/redhat-marketplace-7pjdw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pjdw\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.476438 4892 generic.go:334] "Generic (PLEG): container finished" podID="49114a09-ac3a-4dbd-99f1-26543fbf5dcf" containerID="c3c34c47464948a1f3d576d17a2a297146a48e932db6d5950b08539dbd3cca56" exitCode=0 Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.476559 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrbh2" event={"ID":"49114a09-ac3a-4dbd-99f1-26543fbf5dcf","Type":"ContainerDied","Data":"c3c34c47464948a1f3d576d17a2a297146a48e932db6d5950b08539dbd3cca56"} Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.476598 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrbh2" event={"ID":"49114a09-ac3a-4dbd-99f1-26543fbf5dcf","Type":"ContainerStarted","Data":"2b40c319c3ea6c1e8d44f24b3f6a4f5279da046ee10611463bd654d5cdf28ab9"} Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.477329 4892 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.477946 4892 status_manager.go:851] "Failed to get status for pod" podUID="44ffdd66-4baa-46c3-9441-4f46b6c0c835" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.478330 4892 status_manager.go:851] "Failed to get status for pod" podUID="49114a09-ac3a-4dbd-99f1-26543fbf5dcf" pod="openshift-marketplace/community-operators-vrbh2" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vrbh2\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.478649 4892 status_manager.go:851] "Failed to get status for pod" podUID="1c53bdc3-44ab-4be2-9f83-2d241776a337" pod="openshift-marketplace/redhat-marketplace-7pjdw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pjdw\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.482573 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.484540 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.485935 4892 scope.go:117] "RemoveContainer" containerID="958bc25e110090de06886c7322add7aec8a8c8ade7906d0a571e6e88aacf8907" Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.485825 4892 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c" exitCode=0 Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.487064 4892 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3" exitCode=0 Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.487151 4892 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9" exitCode=0 Jan 22 09:14:35 crc kubenswrapper[4892]: I0122 09:14:35.487228 4892 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105" exitCode=2 Jan 22 09:14:35 crc kubenswrapper[4892]: E0122 09:14:35.935571 4892 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.236:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d02c558027370 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:14:35.234841456 +0000 UTC m=+245.078920519,LastTimestamp:2026-01-22 09:14:35.234841456 +0000 UTC m=+245.078920519,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.493310 4892 generic.go:334] "Generic (PLEG): container 
finished" podID="91b13dd5-7aad-496a-8138-9a9e638a0a01" containerID="fec586906c6f3e3dd4c6a5c7112c6c2494ce0be62c9df6697b3f6eec2aca8bf4" exitCode=0 Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.493365 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pf4pl" event={"ID":"91b13dd5-7aad-496a-8138-9a9e638a0a01","Type":"ContainerDied","Data":"fec586906c6f3e3dd4c6a5c7112c6c2494ce0be62c9df6697b3f6eec2aca8bf4"} Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.493998 4892 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.494180 4892 status_manager.go:851] "Failed to get status for pod" podUID="44ffdd66-4baa-46c3-9441-4f46b6c0c835" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.494418 4892 status_manager.go:851] "Failed to get status for pod" podUID="49114a09-ac3a-4dbd-99f1-26543fbf5dcf" pod="openshift-marketplace/community-operators-vrbh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vrbh2\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.494747 4892 status_manager.go:851] "Failed to get status for pod" podUID="91b13dd5-7aad-496a-8138-9a9e638a0a01" pod="openshift-marketplace/redhat-operators-pf4pl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pf4pl\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.495270 4892 status_manager.go:851] "Failed to get status for pod" podUID="1c53bdc3-44ab-4be2-9f83-2d241776a337" pod="openshift-marketplace/redhat-marketplace-7pjdw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pjdw\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.496454 4892 generic.go:334] "Generic (PLEG): container finished" podID="49114a09-ac3a-4dbd-99f1-26543fbf5dcf" containerID="1bd26cac186ab02864bdc958d27a2b6316d1769f0b95663c7ef3309fecd81148" exitCode=0 Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.496507 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrbh2" event={"ID":"49114a09-ac3a-4dbd-99f1-26543fbf5dcf","Type":"ContainerDied","Data":"1bd26cac186ab02864bdc958d27a2b6316d1769f0b95663c7ef3309fecd81148"} Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.497057 4892 status_manager.go:851] "Failed to get status for pod" podUID="44ffdd66-4baa-46c3-9441-4f46b6c0c835" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.497483 4892 status_manager.go:851] "Failed to get status for pod" 
podUID="49114a09-ac3a-4dbd-99f1-26543fbf5dcf" pod="openshift-marketplace/community-operators-vrbh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vrbh2\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.497715 4892 status_manager.go:851] "Failed to get status for pod" podUID="91b13dd5-7aad-496a-8138-9a9e638a0a01" pod="openshift-marketplace/redhat-operators-pf4pl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pf4pl\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.497977 4892 status_manager.go:851] "Failed to get status for pod" podUID="1c53bdc3-44ab-4be2-9f83-2d241776a337" pod="openshift-marketplace/redhat-marketplace-7pjdw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pjdw\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.498500 4892 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.501384 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.507276 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"b0d5d3f120eafae729adfbba9cd4dbce65de0872160cf71bbfab2d3c0b417e3d"} Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.508181 4892 status_manager.go:851] "Failed to get status for pod" podUID="44ffdd66-4baa-46c3-9441-4f46b6c0c835" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.508355 4892 status_manager.go:851] "Failed to get status for pod" podUID="49114a09-ac3a-4dbd-99f1-26543fbf5dcf" pod="openshift-marketplace/community-operators-vrbh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vrbh2\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.508497 4892 status_manager.go:851] "Failed to get status for pod" podUID="91b13dd5-7aad-496a-8138-9a9e638a0a01" pod="openshift-marketplace/redhat-operators-pf4pl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pf4pl\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.509619 4892 status_manager.go:851] "Failed to get status for pod" podUID="1c53bdc3-44ab-4be2-9f83-2d241776a337" pod="openshift-marketplace/redhat-marketplace-7pjdw" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pjdw\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.509967 4892 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.510638 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p8m4r" event={"ID":"5524172a-41d9-4206-b133-ff86aa15f588","Type":"ContainerStarted","Data":"402b216e1e42cf81df925eceb7b218ca18832a1d5a1331bdd28d8838df972e4c"} Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.511223 4892 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.511451 4892 status_manager.go:851] "Failed to get status for pod" podUID="44ffdd66-4baa-46c3-9441-4f46b6c0c835" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.511597 4892 status_manager.go:851] "Failed to get status for pod" podUID="49114a09-ac3a-4dbd-99f1-26543fbf5dcf" pod="openshift-marketplace/community-operators-vrbh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vrbh2\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.511733 4892 status_manager.go:851] "Failed to get status for pod" podUID="91b13dd5-7aad-496a-8138-9a9e638a0a01" pod="openshift-marketplace/redhat-operators-pf4pl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pf4pl\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.512575 4892 status_manager.go:851] "Failed to get status for pod" podUID="5524172a-41d9-4206-b133-ff86aa15f588" pod="openshift-marketplace/certified-operators-p8m4r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p8m4r\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.512854 4892 status_manager.go:851] "Failed to get status for pod" podUID="1c53bdc3-44ab-4be2-9f83-2d241776a337" pod="openshift-marketplace/redhat-marketplace-7pjdw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pjdw\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.765244 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.766274 4892 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.766567 4892 status_manager.go:851] "Failed to get status for pod" podUID="44ffdd66-4baa-46c3-9441-4f46b6c0c835" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.766789 4892 status_manager.go:851] "Failed to get status for pod" podUID="49114a09-ac3a-4dbd-99f1-26543fbf5dcf" pod="openshift-marketplace/community-operators-vrbh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vrbh2\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.766999 4892 status_manager.go:851] "Failed to get status for pod" podUID="91b13dd5-7aad-496a-8138-9a9e638a0a01" pod="openshift-marketplace/redhat-operators-pf4pl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pf4pl\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.767214 4892 status_manager.go:851] "Failed to get status for pod" podUID="5524172a-41d9-4206-b133-ff86aa15f588" pod="openshift-marketplace/certified-operators-p8m4r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p8m4r\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.767478 4892 status_manager.go:851] "Failed to get status for pod" podUID="1c53bdc3-44ab-4be2-9f83-2d241776a337" pod="openshift-marketplace/redhat-marketplace-7pjdw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pjdw\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.865369 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44ffdd66-4baa-46c3-9441-4f46b6c0c835-kube-api-access\") pod \"44ffdd66-4baa-46c3-9441-4f46b6c0c835\" (UID: \"44ffdd66-4baa-46c3-9441-4f46b6c0c835\") " Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.865521 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/44ffdd66-4baa-46c3-9441-4f46b6c0c835-var-lock\") pod \"44ffdd66-4baa-46c3-9441-4f46b6c0c835\" (UID: \"44ffdd66-4baa-46c3-9441-4f46b6c0c835\") " Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.865552 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/44ffdd66-4baa-46c3-9441-4f46b6c0c835-kubelet-dir\") pod \"44ffdd66-4baa-46c3-9441-4f46b6c0c835\" (UID: \"44ffdd66-4baa-46c3-9441-4f46b6c0c835\") " Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 
09:14:36.865669 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44ffdd66-4baa-46c3-9441-4f46b6c0c835-var-lock" (OuterVolumeSpecName: "var-lock") pod "44ffdd66-4baa-46c3-9441-4f46b6c0c835" (UID: "44ffdd66-4baa-46c3-9441-4f46b6c0c835"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.865728 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44ffdd66-4baa-46c3-9441-4f46b6c0c835-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "44ffdd66-4baa-46c3-9441-4f46b6c0c835" (UID: "44ffdd66-4baa-46c3-9441-4f46b6c0c835"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.865966 4892 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/44ffdd66-4baa-46c3-9441-4f46b6c0c835-var-lock\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.865987 4892 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/44ffdd66-4baa-46c3-9441-4f46b6c0c835-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.870616 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44ffdd66-4baa-46c3-9441-4f46b6c0c835-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "44ffdd66-4baa-46c3-9441-4f46b6c0c835" (UID: "44ffdd66-4baa-46c3-9441-4f46b6c0c835"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:14:36 crc kubenswrapper[4892]: I0122 09:14:36.968543 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44ffdd66-4baa-46c3-9441-4f46b6c0c835-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.227696 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.229588 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.230187 4892 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.230462 4892 status_manager.go:851] "Failed to get status for pod" podUID="44ffdd66-4baa-46c3-9441-4f46b6c0c835" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.230729 4892 status_manager.go:851] "Failed to get status for pod" podUID="49114a09-ac3a-4dbd-99f1-26543fbf5dcf" pod="openshift-marketplace/community-operators-vrbh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vrbh2\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.231180 4892 status_manager.go:851] "Failed to get status for pod" podUID="91b13dd5-7aad-496a-8138-9a9e638a0a01" pod="openshift-marketplace/redhat-operators-pf4pl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pf4pl\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.231439 4892 status_manager.go:851] "Failed to get status for pod" podUID="5524172a-41d9-4206-b133-ff86aa15f588" pod="openshift-marketplace/certified-operators-p8m4r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p8m4r\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.231721 4892 status_manager.go:851] "Failed to get status for pod" podUID="1c53bdc3-44ab-4be2-9f83-2d241776a337" pod="openshift-marketplace/redhat-marketplace-7pjdw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pjdw\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.231965 4892 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.272449 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.272546 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 22 09:14:37 crc 
kubenswrapper[4892]: I0122 09:14:37.272588 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.272608 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.272642 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.272664 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.273045 4892 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.273066 4892 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.273075 4892 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.428459 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.519415 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrbh2" event={"ID":"49114a09-ac3a-4dbd-99f1-26543fbf5dcf","Type":"ContainerStarted","Data":"0fe83c4585dd8386d9c562b074faacbc1ce8fbb206147baff7f98937e6a2ace1"} Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.520086 4892 status_manager.go:851] "Failed to get status for pod" podUID="49114a09-ac3a-4dbd-99f1-26543fbf5dcf" pod="openshift-marketplace/community-operators-vrbh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vrbh2\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.520322 4892 status_manager.go:851] "Failed to get status for pod" podUID="91b13dd5-7aad-496a-8138-9a9e638a0a01" pod="openshift-marketplace/redhat-operators-pf4pl" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pf4pl\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.520618 4892 status_manager.go:851] "Failed to get status for pod" podUID="5524172a-41d9-4206-b133-ff86aa15f588" pod="openshift-marketplace/certified-operators-p8m4r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p8m4r\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.520847 4892 status_manager.go:851] "Failed to get status for pod" podUID="1c53bdc3-44ab-4be2-9f83-2d241776a337" pod="openshift-marketplace/redhat-marketplace-7pjdw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pjdw\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.521080 4892 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.521330 4892 status_manager.go:851] "Failed to get status for pod" podUID="44ffdd66-4baa-46c3-9441-4f46b6c0c835" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.521963 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pf4pl" event={"ID":"91b13dd5-7aad-496a-8138-9a9e638a0a01","Type":"ContainerStarted","Data":"16cd2284874f054606f8c582ab9f574795dd195067dd0343f590ee7c7e3ca2d1"} Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.522484 4892 status_manager.go:851] "Failed to get status for pod" podUID="1c53bdc3-44ab-4be2-9f83-2d241776a337" pod="openshift-marketplace/redhat-marketplace-7pjdw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pjdw\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.522934 4892 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.523355 4892 status_manager.go:851] "Failed to get status for pod" podUID="44ffdd66-4baa-46c3-9441-4f46b6c0c835" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.523713 4892 status_manager.go:851] "Failed to get status for pod" podUID="49114a09-ac3a-4dbd-99f1-26543fbf5dcf" pod="openshift-marketplace/community-operators-vrbh2" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vrbh2\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.524035 4892 status_manager.go:851] "Failed to get status for pod" podUID="91b13dd5-7aad-496a-8138-9a9e638a0a01" pod="openshift-marketplace/redhat-operators-pf4pl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pf4pl\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.524334 4892 status_manager.go:851] "Failed to get status for pod" podUID="5524172a-41d9-4206-b133-ff86aa15f588" pod="openshift-marketplace/certified-operators-p8m4r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p8m4r\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.525866 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.526593 4892 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086" exitCode=0 Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.526669 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.526711 4892 scope.go:117] "RemoveContainer" containerID="f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.527621 4892 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.528017 4892 status_manager.go:851] "Failed to get status for pod" podUID="44ffdd66-4baa-46c3-9441-4f46b6c0c835" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.528110 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"44ffdd66-4baa-46c3-9441-4f46b6c0c835","Type":"ContainerDied","Data":"6bcdf144206f9c7a99a341c876aa50b70e60247cd092a3cd1c6aad6fc485c2db"} Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.528133 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bcdf144206f9c7a99a341c876aa50b70e60247cd092a3cd1c6aad6fc485c2db" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.528451 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.529048 4892 status_manager.go:851] "Failed to get status for pod" podUID="49114a09-ac3a-4dbd-99f1-26543fbf5dcf" pod="openshift-marketplace/community-operators-vrbh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vrbh2\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.529337 4892 status_manager.go:851] "Failed to get status for pod" podUID="91b13dd5-7aad-496a-8138-9a9e638a0a01" pod="openshift-marketplace/redhat-operators-pf4pl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pf4pl\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.529611 4892 status_manager.go:851] "Failed to get status for pod" podUID="5524172a-41d9-4206-b133-ff86aa15f588" pod="openshift-marketplace/certified-operators-p8m4r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p8m4r\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.529841 4892 status_manager.go:851] "Failed to get status for pod" podUID="1c53bdc3-44ab-4be2-9f83-2d241776a337" pod="openshift-marketplace/redhat-marketplace-7pjdw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pjdw\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.530051 4892 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.530424 4892 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.531244 4892 status_manager.go:851] "Failed to get status for pod" podUID="44ffdd66-4baa-46c3-9441-4f46b6c0c835" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.531550 4892 status_manager.go:851] "Failed to get status for pod" podUID="49114a09-ac3a-4dbd-99f1-26543fbf5dcf" pod="openshift-marketplace/community-operators-vrbh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vrbh2\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.531800 4892 status_manager.go:851] "Failed to get status for pod" podUID="91b13dd5-7aad-496a-8138-9a9e638a0a01" pod="openshift-marketplace/redhat-operators-pf4pl" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pf4pl\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.532028 4892 status_manager.go:851] "Failed to get status for pod" podUID="5524172a-41d9-4206-b133-ff86aa15f588" pod="openshift-marketplace/certified-operators-p8m4r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p8m4r\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.532169 4892 status_manager.go:851] "Failed to get status for pod" podUID="1c53bdc3-44ab-4be2-9f83-2d241776a337" pod="openshift-marketplace/redhat-marketplace-7pjdw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pjdw\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.532333 4892 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.532539 4892 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.532687 4892 status_manager.go:851] "Failed to get status for pod" podUID="44ffdd66-4baa-46c3-9441-4f46b6c0c835" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.532827 4892 status_manager.go:851] "Failed to get status for pod" podUID="49114a09-ac3a-4dbd-99f1-26543fbf5dcf" pod="openshift-marketplace/community-operators-vrbh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vrbh2\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.532974 4892 status_manager.go:851] "Failed to get status for pod" podUID="91b13dd5-7aad-496a-8138-9a9e638a0a01" pod="openshift-marketplace/redhat-operators-pf4pl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pf4pl\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.533113 4892 status_manager.go:851] "Failed to get status for pod" podUID="5524172a-41d9-4206-b133-ff86aa15f588" pod="openshift-marketplace/certified-operators-p8m4r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p8m4r\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.533251 4892 status_manager.go:851] "Failed to get status for pod" podUID="1c53bdc3-44ab-4be2-9f83-2d241776a337" pod="openshift-marketplace/redhat-marketplace-7pjdw" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pjdw\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.533442 4892 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.539589 4892 scope.go:117] "RemoveContainer" containerID="0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.555423 4892 scope.go:117] "RemoveContainer" containerID="91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.571435 4892 scope.go:117] "RemoveContainer" containerID="d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.600267 4892 scope.go:117] "RemoveContainer" containerID="8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.615981 4892 scope.go:117] "RemoveContainer" containerID="7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.632524 4892 scope.go:117] "RemoveContainer" containerID="f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c" Jan 22 09:14:37 crc kubenswrapper[4892]: E0122 09:14:37.632940 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\": container with ID starting with f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c not found: ID does not exist" containerID="f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.632977 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c"} err="failed to get container status \"f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\": rpc error: code = NotFound desc = could not find container \"f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c\": container with ID starting with f872dd40675a61bce98602758d30299cd2dbc1a466e25c96e1ec70990fa8261c not found: ID does not exist" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.633006 4892 scope.go:117] "RemoveContainer" containerID="0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3" Jan 22 09:14:37 crc kubenswrapper[4892]: E0122 09:14:37.633353 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\": container with ID starting with 0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3 not found: ID does not exist" containerID="0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.633381 4892 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3"} err="failed to get container status \"0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\": rpc error: code = NotFound desc = could not find container \"0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3\": container with ID starting with 0909fbf375e6d800e7929540c72f800acf4c7646e4a09cdd95b3a5327dcc0aa3 not found: ID does not exist" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.633397 4892 scope.go:117] "RemoveContainer" containerID="91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9" Jan 22 09:14:37 crc kubenswrapper[4892]: E0122 09:14:37.633681 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\": container with ID starting with 91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9 not found: ID does not exist" containerID="91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.633701 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9"} err="failed to get container status \"91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\": rpc error: code = NotFound desc = could not find container \"91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9\": container with ID starting with 91cedd7e11addf6cf6416087a03723719b56340f48fde7d8d65279cd8c59b1e9 not found: ID does not exist" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.633717 4892 scope.go:117] "RemoveContainer" containerID="d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105" Jan 22 09:14:37 crc kubenswrapper[4892]: E0122 09:14:37.634069 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\": container with ID starting with d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105 not found: ID does not exist" containerID="d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.634096 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105"} err="failed to get container status \"d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\": rpc error: code = NotFound desc = could not find container \"d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105\": container with ID starting with d813262d343ecf17e0d9ecd07cab015f76c5ad16f838e421d9830c696e478105 not found: ID does not exist" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.634114 4892 scope.go:117] "RemoveContainer" containerID="8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086" Jan 22 09:14:37 crc kubenswrapper[4892]: E0122 09:14:37.634938 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\": container with ID starting with 8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086 not found: ID does not exist" 
containerID="8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.634965 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086"} err="failed to get container status \"8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\": rpc error: code = NotFound desc = could not find container \"8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086\": container with ID starting with 8338cd882dadcb6393b7af4390da017cd7f309f36fc44542d712470f0a37f086 not found: ID does not exist" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.634981 4892 scope.go:117] "RemoveContainer" containerID="7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869" Jan 22 09:14:37 crc kubenswrapper[4892]: E0122 09:14:37.635303 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\": container with ID starting with 7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869 not found: ID does not exist" containerID="7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869" Jan 22 09:14:37 crc kubenswrapper[4892]: I0122 09:14:37.635327 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869"} err="failed to get container status \"7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\": rpc error: code = NotFound desc = could not find container \"7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869\": container with ID starting with 7a31f501ec05acf29321755efae75d4b5d8a7eea71c801f7b50952ef2ad41869 not found: ID does not exist" Jan 22 09:14:39 crc kubenswrapper[4892]: I0122 09:14:39.126372 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" Jan 22 09:14:39 crc kubenswrapper[4892]: I0122 09:14:39.127017 4892 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:39 crc kubenswrapper[4892]: I0122 09:14:39.127895 4892 status_manager.go:851] "Failed to get status for pod" podUID="44ffdd66-4baa-46c3-9441-4f46b6c0c835" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:39 crc kubenswrapper[4892]: I0122 09:14:39.128236 4892 status_manager.go:851] "Failed to get status for pod" podUID="49114a09-ac3a-4dbd-99f1-26543fbf5dcf" pod="openshift-marketplace/community-operators-vrbh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vrbh2\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:39 crc kubenswrapper[4892]: I0122 09:14:39.128530 4892 status_manager.go:851] "Failed to get status for pod" podUID="91b13dd5-7aad-496a-8138-9a9e638a0a01" pod="openshift-marketplace/redhat-operators-pf4pl" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pf4pl\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:39 crc kubenswrapper[4892]: I0122 09:14:39.128914 4892 status_manager.go:851] "Failed to get status for pod" podUID="5524172a-41d9-4206-b133-ff86aa15f588" pod="openshift-marketplace/certified-operators-p8m4r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p8m4r\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:39 crc kubenswrapper[4892]: I0122 09:14:39.129356 4892 status_manager.go:851] "Failed to get status for pod" podUID="1c53bdc3-44ab-4be2-9f83-2d241776a337" pod="openshift-marketplace/redhat-marketplace-7pjdw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pjdw\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:39 crc kubenswrapper[4892]: I0122 09:14:39.129662 4892 status_manager.go:851] "Failed to get status for pod" podUID="0191742c-4602-476c-9a6a-52600797194a" pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-4rzs7\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:39 crc kubenswrapper[4892]: I0122 09:14:39.129948 4892 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:39 crc kubenswrapper[4892]: E0122 09:14:39.149988 4892 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.236:6443: connect: connection refused" pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" volumeName="registry-storage" Jan 22 09:14:40 crc kubenswrapper[4892]: E0122 09:14:40.465748 4892 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.236:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" volumeName="registry-storage" Jan 22 09:14:41 crc kubenswrapper[4892]: I0122 09:14:41.421018 4892 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:41 crc kubenswrapper[4892]: I0122 09:14:41.421328 4892 status_manager.go:851] "Failed to get status for pod" podUID="44ffdd66-4baa-46c3-9441-4f46b6c0c835" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:41 crc kubenswrapper[4892]: I0122 09:14:41.421662 4892 status_manager.go:851] "Failed to get status for pod" podUID="49114a09-ac3a-4dbd-99f1-26543fbf5dcf" pod="openshift-marketplace/community-operators-vrbh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vrbh2\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:41 crc kubenswrapper[4892]: I0122 09:14:41.422029 4892 status_manager.go:851] "Failed to get status for pod" podUID="91b13dd5-7aad-496a-8138-9a9e638a0a01" pod="openshift-marketplace/redhat-operators-pf4pl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pf4pl\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:41 crc kubenswrapper[4892]: I0122 09:14:41.422440 4892 status_manager.go:851] "Failed to get status for pod" podUID="5524172a-41d9-4206-b133-ff86aa15f588" pod="openshift-marketplace/certified-operators-p8m4r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p8m4r\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:41 crc kubenswrapper[4892]: I0122 09:14:41.422685 4892 status_manager.go:851] "Failed to get status for pod" podUID="1c53bdc3-44ab-4be2-9f83-2d241776a337" pod="openshift-marketplace/redhat-marketplace-7pjdw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pjdw\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:41 crc kubenswrapper[4892]: I0122 09:14:41.423608 4892 status_manager.go:851] "Failed to get status for pod" podUID="0191742c-4602-476c-9a6a-52600797194a" pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-4rzs7\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:41 crc kubenswrapper[4892]: I0122 09:14:41.465381 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7pjdw" Jan 22 09:14:41 crc kubenswrapper[4892]: I0122 09:14:41.465428 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7pjdw" Jan 22 09:14:41 crc kubenswrapper[4892]: I0122 09:14:41.514341 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7pjdw" Jan 22 09:14:41 crc kubenswrapper[4892]: I0122 09:14:41.515053 4892 status_manager.go:851] "Failed to get status for pod" podUID="91b13dd5-7aad-496a-8138-9a9e638a0a01" pod="openshift-marketplace/redhat-operators-pf4pl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pf4pl\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:41 crc kubenswrapper[4892]: I0122 09:14:41.515338 4892 status_manager.go:851] "Failed to get status for pod" podUID="5524172a-41d9-4206-b133-ff86aa15f588" pod="openshift-marketplace/certified-operators-p8m4r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p8m4r\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:41 crc 
kubenswrapper[4892]: I0122 09:14:41.515613 4892 status_manager.go:851] "Failed to get status for pod" podUID="0191742c-4602-476c-9a6a-52600797194a" pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-4rzs7\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:41 crc kubenswrapper[4892]: I0122 09:14:41.515935 4892 status_manager.go:851] "Failed to get status for pod" podUID="1c53bdc3-44ab-4be2-9f83-2d241776a337" pod="openshift-marketplace/redhat-marketplace-7pjdw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pjdw\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:41 crc kubenswrapper[4892]: I0122 09:14:41.516222 4892 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:41 crc kubenswrapper[4892]: I0122 09:14:41.516499 4892 status_manager.go:851] "Failed to get status for pod" podUID="44ffdd66-4baa-46c3-9441-4f46b6c0c835" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:41 crc kubenswrapper[4892]: I0122 09:14:41.516899 4892 status_manager.go:851] "Failed to get status for pod" podUID="49114a09-ac3a-4dbd-99f1-26543fbf5dcf" pod="openshift-marketplace/community-operators-vrbh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vrbh2\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:41 crc kubenswrapper[4892]: I0122 09:14:41.593051 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7pjdw" Jan 22 09:14:41 crc kubenswrapper[4892]: I0122 09:14:41.593583 4892 status_manager.go:851] "Failed to get status for pod" podUID="49114a09-ac3a-4dbd-99f1-26543fbf5dcf" pod="openshift-marketplace/community-operators-vrbh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vrbh2\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:41 crc kubenswrapper[4892]: I0122 09:14:41.593785 4892 status_manager.go:851] "Failed to get status for pod" podUID="91b13dd5-7aad-496a-8138-9a9e638a0a01" pod="openshift-marketplace/redhat-operators-pf4pl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pf4pl\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:41 crc kubenswrapper[4892]: I0122 09:14:41.593971 4892 status_manager.go:851] "Failed to get status for pod" podUID="5524172a-41d9-4206-b133-ff86aa15f588" pod="openshift-marketplace/certified-operators-p8m4r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p8m4r\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:41 crc kubenswrapper[4892]: I0122 09:14:41.594277 4892 status_manager.go:851] "Failed to get status for pod" podUID="1c53bdc3-44ab-4be2-9f83-2d241776a337" 
pod="openshift-marketplace/redhat-marketplace-7pjdw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pjdw\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:41 crc kubenswrapper[4892]: I0122 09:14:41.594597 4892 status_manager.go:851] "Failed to get status for pod" podUID="0191742c-4602-476c-9a6a-52600797194a" pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-4rzs7\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:41 crc kubenswrapper[4892]: I0122 09:14:41.594812 4892 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:41 crc kubenswrapper[4892]: I0122 09:14:41.595018 4892 status_manager.go:851] "Failed to get status for pod" podUID="44ffdd66-4baa-46c3-9441-4f46b6c0c835" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:41 crc kubenswrapper[4892]: I0122 09:14:41.658136 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p8m4r" Jan 22 09:14:41 crc kubenswrapper[4892]: I0122 09:14:41.658312 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p8m4r" Jan 22 09:14:41 crc kubenswrapper[4892]: I0122 09:14:41.695467 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p8m4r" Jan 22 09:14:41 crc kubenswrapper[4892]: I0122 09:14:41.696025 4892 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:41 crc kubenswrapper[4892]: I0122 09:14:41.696417 4892 status_manager.go:851] "Failed to get status for pod" podUID="44ffdd66-4baa-46c3-9441-4f46b6c0c835" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:41 crc kubenswrapper[4892]: I0122 09:14:41.697066 4892 status_manager.go:851] "Failed to get status for pod" podUID="49114a09-ac3a-4dbd-99f1-26543fbf5dcf" pod="openshift-marketplace/community-operators-vrbh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vrbh2\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:41 crc kubenswrapper[4892]: I0122 09:14:41.697596 4892 status_manager.go:851] "Failed to get status for pod" podUID="91b13dd5-7aad-496a-8138-9a9e638a0a01" pod="openshift-marketplace/redhat-operators-pf4pl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pf4pl\": dial 
tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:41 crc kubenswrapper[4892]: I0122 09:14:41.697840 4892 status_manager.go:851] "Failed to get status for pod" podUID="5524172a-41d9-4206-b133-ff86aa15f588" pod="openshift-marketplace/certified-operators-p8m4r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p8m4r\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:41 crc kubenswrapper[4892]: I0122 09:14:41.698178 4892 status_manager.go:851] "Failed to get status for pod" podUID="0191742c-4602-476c-9a6a-52600797194a" pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-4rzs7\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:41 crc kubenswrapper[4892]: I0122 09:14:41.698465 4892 status_manager.go:851] "Failed to get status for pod" podUID="1c53bdc3-44ab-4be2-9f83-2d241776a337" pod="openshift-marketplace/redhat-marketplace-7pjdw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pjdw\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:42 crc kubenswrapper[4892]: I0122 09:14:42.600880 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p8m4r" Jan 22 09:14:42 crc kubenswrapper[4892]: I0122 09:14:42.601529 4892 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:42 crc kubenswrapper[4892]: I0122 09:14:42.602018 4892 status_manager.go:851] "Failed to get status for pod" podUID="44ffdd66-4baa-46c3-9441-4f46b6c0c835" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:42 crc kubenswrapper[4892]: I0122 09:14:42.602914 4892 status_manager.go:851] "Failed to get status for pod" podUID="49114a09-ac3a-4dbd-99f1-26543fbf5dcf" pod="openshift-marketplace/community-operators-vrbh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vrbh2\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:42 crc kubenswrapper[4892]: I0122 09:14:42.603277 4892 status_manager.go:851] "Failed to get status for pod" podUID="91b13dd5-7aad-496a-8138-9a9e638a0a01" pod="openshift-marketplace/redhat-operators-pf4pl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pf4pl\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:42 crc kubenswrapper[4892]: I0122 09:14:42.603583 4892 status_manager.go:851] "Failed to get status for pod" podUID="5524172a-41d9-4206-b133-ff86aa15f588" pod="openshift-marketplace/certified-operators-p8m4r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p8m4r\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:42 crc kubenswrapper[4892]: I0122 09:14:42.603843 4892 status_manager.go:851] "Failed to get 
status for pod" podUID="0191742c-4602-476c-9a6a-52600797194a" pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-4rzs7\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:42 crc kubenswrapper[4892]: I0122 09:14:42.604181 4892 status_manager.go:851] "Failed to get status for pod" podUID="1c53bdc3-44ab-4be2-9f83-2d241776a337" pod="openshift-marketplace/redhat-marketplace-7pjdw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pjdw\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:43 crc kubenswrapper[4892]: I0122 09:14:43.840754 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pf4pl" Jan 22 09:14:43 crc kubenswrapper[4892]: I0122 09:14:43.840832 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pf4pl" Jan 22 09:14:43 crc kubenswrapper[4892]: I0122 09:14:43.885931 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pf4pl" Jan 22 09:14:43 crc kubenswrapper[4892]: I0122 09:14:43.886732 4892 status_manager.go:851] "Failed to get status for pod" podUID="49114a09-ac3a-4dbd-99f1-26543fbf5dcf" pod="openshift-marketplace/community-operators-vrbh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vrbh2\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:43 crc kubenswrapper[4892]: I0122 09:14:43.887405 4892 status_manager.go:851] "Failed to get status for pod" podUID="91b13dd5-7aad-496a-8138-9a9e638a0a01" pod="openshift-marketplace/redhat-operators-pf4pl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pf4pl\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:43 crc kubenswrapper[4892]: I0122 09:14:43.887799 4892 status_manager.go:851] "Failed to get status for pod" podUID="5524172a-41d9-4206-b133-ff86aa15f588" pod="openshift-marketplace/certified-operators-p8m4r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p8m4r\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:43 crc kubenswrapper[4892]: I0122 09:14:43.888397 4892 status_manager.go:851] "Failed to get status for pod" podUID="1c53bdc3-44ab-4be2-9f83-2d241776a337" pod="openshift-marketplace/redhat-marketplace-7pjdw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pjdw\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:43 crc kubenswrapper[4892]: I0122 09:14:43.888718 4892 status_manager.go:851] "Failed to get status for pod" podUID="0191742c-4602-476c-9a6a-52600797194a" pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-4rzs7\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:43 crc kubenswrapper[4892]: I0122 09:14:43.888994 4892 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:43 crc kubenswrapper[4892]: I0122 09:14:43.889246 4892 status_manager.go:851] "Failed to get status for pod" podUID="44ffdd66-4baa-46c3-9441-4f46b6c0c835" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:43 crc kubenswrapper[4892]: E0122 09:14:43.930253 4892 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:43 crc kubenswrapper[4892]: E0122 09:14:43.930779 4892 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:43 crc kubenswrapper[4892]: E0122 09:14:43.931055 4892 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:43 crc kubenswrapper[4892]: E0122 09:14:43.931245 4892 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:43 crc kubenswrapper[4892]: E0122 09:14:43.931456 4892 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:43 crc kubenswrapper[4892]: I0122 09:14:43.931482 4892 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 22 09:14:43 crc kubenswrapper[4892]: E0122 09:14:43.931629 4892 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" interval="200ms" Jan 22 09:14:44 crc kubenswrapper[4892]: I0122 09:14:44.100217 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vrbh2" Jan 22 09:14:44 crc kubenswrapper[4892]: I0122 09:14:44.100305 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vrbh2" Jan 22 09:14:44 crc kubenswrapper[4892]: E0122 09:14:44.132987 4892 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" interval="400ms" Jan 22 09:14:44 crc kubenswrapper[4892]: I0122 09:14:44.139300 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vrbh2" Jan 22 09:14:44 crc kubenswrapper[4892]: I0122 09:14:44.139879 4892 
status_manager.go:851] "Failed to get status for pod" podUID="49114a09-ac3a-4dbd-99f1-26543fbf5dcf" pod="openshift-marketplace/community-operators-vrbh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vrbh2\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:44 crc kubenswrapper[4892]: I0122 09:14:44.140516 4892 status_manager.go:851] "Failed to get status for pod" podUID="91b13dd5-7aad-496a-8138-9a9e638a0a01" pod="openshift-marketplace/redhat-operators-pf4pl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pf4pl\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:44 crc kubenswrapper[4892]: I0122 09:14:44.140785 4892 status_manager.go:851] "Failed to get status for pod" podUID="5524172a-41d9-4206-b133-ff86aa15f588" pod="openshift-marketplace/certified-operators-p8m4r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p8m4r\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:44 crc kubenswrapper[4892]: I0122 09:14:44.141188 4892 status_manager.go:851] "Failed to get status for pod" podUID="1c53bdc3-44ab-4be2-9f83-2d241776a337" pod="openshift-marketplace/redhat-marketplace-7pjdw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pjdw\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:44 crc kubenswrapper[4892]: I0122 09:14:44.141647 4892 status_manager.go:851] "Failed to get status for pod" podUID="0191742c-4602-476c-9a6a-52600797194a" pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-4rzs7\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:44 crc kubenswrapper[4892]: I0122 09:14:44.142019 4892 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:44 crc kubenswrapper[4892]: I0122 09:14:44.142392 4892 status_manager.go:851] "Failed to get status for pod" podUID="44ffdd66-4baa-46c3-9441-4f46b6c0c835" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:44 crc kubenswrapper[4892]: E0122 09:14:44.534173 4892 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" interval="800ms" Jan 22 09:14:44 crc kubenswrapper[4892]: I0122 09:14:44.609728 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pf4pl" Jan 22 09:14:44 crc kubenswrapper[4892]: I0122 09:14:44.610390 4892 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:44 crc kubenswrapper[4892]: I0122 09:14:44.610914 4892 status_manager.go:851] "Failed to get status for pod" podUID="44ffdd66-4baa-46c3-9441-4f46b6c0c835" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:44 crc kubenswrapper[4892]: I0122 09:14:44.611361 4892 status_manager.go:851] "Failed to get status for pod" podUID="49114a09-ac3a-4dbd-99f1-26543fbf5dcf" pod="openshift-marketplace/community-operators-vrbh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vrbh2\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:44 crc kubenswrapper[4892]: I0122 09:14:44.611683 4892 status_manager.go:851] "Failed to get status for pod" podUID="91b13dd5-7aad-496a-8138-9a9e638a0a01" pod="openshift-marketplace/redhat-operators-pf4pl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pf4pl\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:44 crc kubenswrapper[4892]: I0122 09:14:44.612409 4892 status_manager.go:851] "Failed to get status for pod" podUID="5524172a-41d9-4206-b133-ff86aa15f588" pod="openshift-marketplace/certified-operators-p8m4r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p8m4r\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:44 crc kubenswrapper[4892]: I0122 09:14:44.612643 4892 status_manager.go:851] "Failed to get status for pod" podUID="0191742c-4602-476c-9a6a-52600797194a" pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-4rzs7\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:44 crc kubenswrapper[4892]: I0122 09:14:44.612855 4892 status_manager.go:851] "Failed to get status for pod" podUID="1c53bdc3-44ab-4be2-9f83-2d241776a337" pod="openshift-marketplace/redhat-marketplace-7pjdw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pjdw\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:44 crc kubenswrapper[4892]: I0122 09:14:44.618836 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vrbh2" Jan 22 09:14:44 crc kubenswrapper[4892]: I0122 09:14:44.619421 4892 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:44 crc kubenswrapper[4892]: I0122 09:14:44.619834 4892 status_manager.go:851] "Failed to get status for pod" podUID="44ffdd66-4baa-46c3-9441-4f46b6c0c835" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection 
refused" Jan 22 09:14:44 crc kubenswrapper[4892]: I0122 09:14:44.620171 4892 status_manager.go:851] "Failed to get status for pod" podUID="49114a09-ac3a-4dbd-99f1-26543fbf5dcf" pod="openshift-marketplace/community-operators-vrbh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vrbh2\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:44 crc kubenswrapper[4892]: I0122 09:14:44.620522 4892 status_manager.go:851] "Failed to get status for pod" podUID="91b13dd5-7aad-496a-8138-9a9e638a0a01" pod="openshift-marketplace/redhat-operators-pf4pl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pf4pl\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:44 crc kubenswrapper[4892]: I0122 09:14:44.620792 4892 status_manager.go:851] "Failed to get status for pod" podUID="5524172a-41d9-4206-b133-ff86aa15f588" pod="openshift-marketplace/certified-operators-p8m4r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p8m4r\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:44 crc kubenswrapper[4892]: I0122 09:14:44.621055 4892 status_manager.go:851] "Failed to get status for pod" podUID="1c53bdc3-44ab-4be2-9f83-2d241776a337" pod="openshift-marketplace/redhat-marketplace-7pjdw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pjdw\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:44 crc kubenswrapper[4892]: I0122 09:14:44.621373 4892 status_manager.go:851] "Failed to get status for pod" podUID="0191742c-4602-476c-9a6a-52600797194a" pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-4rzs7\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:45 crc kubenswrapper[4892]: E0122 09:14:45.334888 4892 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" interval="1.6s" Jan 22 09:14:45 crc kubenswrapper[4892]: I0122 09:14:45.418265 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:14:45 crc kubenswrapper[4892]: I0122 09:14:45.418752 4892 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:45 crc kubenswrapper[4892]: I0122 09:14:45.418999 4892 status_manager.go:851] "Failed to get status for pod" podUID="44ffdd66-4baa-46c3-9441-4f46b6c0c835" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:45 crc kubenswrapper[4892]: I0122 09:14:45.419255 4892 status_manager.go:851] "Failed to get status for pod" podUID="49114a09-ac3a-4dbd-99f1-26543fbf5dcf" pod="openshift-marketplace/community-operators-vrbh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vrbh2\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:45 crc kubenswrapper[4892]: I0122 09:14:45.419530 4892 status_manager.go:851] "Failed to get status for pod" podUID="91b13dd5-7aad-496a-8138-9a9e638a0a01" pod="openshift-marketplace/redhat-operators-pf4pl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pf4pl\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:45 crc kubenswrapper[4892]: I0122 09:14:45.419760 4892 status_manager.go:851] "Failed to get status for pod" podUID="5524172a-41d9-4206-b133-ff86aa15f588" pod="openshift-marketplace/certified-operators-p8m4r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p8m4r\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:45 crc kubenswrapper[4892]: I0122 09:14:45.420048 4892 status_manager.go:851] "Failed to get status for pod" podUID="0191742c-4602-476c-9a6a-52600797194a" pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-4rzs7\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:45 crc kubenswrapper[4892]: I0122 09:14:45.420416 4892 status_manager.go:851] "Failed to get status for pod" podUID="1c53bdc3-44ab-4be2-9f83-2d241776a337" pod="openshift-marketplace/redhat-marketplace-7pjdw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pjdw\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:45 crc kubenswrapper[4892]: I0122 09:14:45.432826 4892 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9ac3cd91-e665-45d1-abbd-2d45b4392193" Jan 22 09:14:45 crc kubenswrapper[4892]: I0122 09:14:45.432873 4892 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9ac3cd91-e665-45d1-abbd-2d45b4392193" Jan 22 09:14:45 crc kubenswrapper[4892]: E0122 09:14:45.433308 4892 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 
38.102.83.236:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:14:45 crc kubenswrapper[4892]: I0122 09:14:45.433823 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:14:45 crc kubenswrapper[4892]: I0122 09:14:45.578615 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"29a9f8bbe799393dd75095851b0c54c60fddc674ffa2f17187220bee2b9d3d46"} Jan 22 09:14:45 crc kubenswrapper[4892]: E0122 09:14:45.937167 4892 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.236:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d02c558027370 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:14:35.234841456 +0000 UTC m=+245.078920519,LastTimestamp:2026-01-22 09:14:35.234841456 +0000 UTC m=+245.078920519,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:14:46 crc kubenswrapper[4892]: I0122 09:14:46.583722 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5a623f577f1f96a79245f5f0da3a894df6e9d8be9315787f285f1707d7ef17c0"} Jan 22 09:14:46 crc kubenswrapper[4892]: E0122 09:14:46.935916 4892 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" interval="3.2s" Jan 22 09:14:47 crc kubenswrapper[4892]: I0122 09:14:47.594055 4892 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9ac3cd91-e665-45d1-abbd-2d45b4392193" Jan 22 09:14:47 crc kubenswrapper[4892]: I0122 09:14:47.594091 4892 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9ac3cd91-e665-45d1-abbd-2d45b4392193" Jan 22 09:14:47 crc kubenswrapper[4892]: I0122 09:14:47.595088 4892 status_manager.go:851] "Failed to get status for pod" podUID="5524172a-41d9-4206-b133-ff86aa15f588" pod="openshift-marketplace/certified-operators-p8m4r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p8m4r\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:47 crc kubenswrapper[4892]: E0122 09:14:47.595189 4892 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.236:6443: connect: 
connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:14:47 crc kubenswrapper[4892]: I0122 09:14:47.595746 4892 status_manager.go:851] "Failed to get status for pod" podUID="0191742c-4602-476c-9a6a-52600797194a" pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-4rzs7\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:47 crc kubenswrapper[4892]: I0122 09:14:47.596240 4892 status_manager.go:851] "Failed to get status for pod" podUID="1c53bdc3-44ab-4be2-9f83-2d241776a337" pod="openshift-marketplace/redhat-marketplace-7pjdw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pjdw\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:47 crc kubenswrapper[4892]: I0122 09:14:47.596547 4892 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:47 crc kubenswrapper[4892]: I0122 09:14:47.597025 4892 status_manager.go:851] "Failed to get status for pod" podUID="44ffdd66-4baa-46c3-9441-4f46b6c0c835" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:47 crc kubenswrapper[4892]: I0122 09:14:47.597417 4892 status_manager.go:851] "Failed to get status for pod" podUID="49114a09-ac3a-4dbd-99f1-26543fbf5dcf" pod="openshift-marketplace/community-operators-vrbh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vrbh2\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:47 crc kubenswrapper[4892]: I0122 09:14:47.597804 4892 status_manager.go:851] "Failed to get status for pod" podUID="91b13dd5-7aad-496a-8138-9a9e638a0a01" pod="openshift-marketplace/redhat-operators-pf4pl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pf4pl\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:48 crc kubenswrapper[4892]: I0122 09:14:48.599164 4892 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="5a623f577f1f96a79245f5f0da3a894df6e9d8be9315787f285f1707d7ef17c0" exitCode=0 Jan 22 09:14:48 crc kubenswrapper[4892]: I0122 09:14:48.599208 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"5a623f577f1f96a79245f5f0da3a894df6e9d8be9315787f285f1707d7ef17c0"} Jan 22 09:14:48 crc kubenswrapper[4892]: I0122 09:14:48.599433 4892 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9ac3cd91-e665-45d1-abbd-2d45b4392193" Jan 22 09:14:48 crc kubenswrapper[4892]: I0122 09:14:48.599444 4892 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9ac3cd91-e665-45d1-abbd-2d45b4392193" Jan 22 09:14:48 crc kubenswrapper[4892]: I0122 09:14:48.599964 4892 status_manager.go:851] "Failed to 
get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:48 crc kubenswrapper[4892]: E0122 09:14:48.600107 4892 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:14:48 crc kubenswrapper[4892]: I0122 09:14:48.600159 4892 status_manager.go:851] "Failed to get status for pod" podUID="44ffdd66-4baa-46c3-9441-4f46b6c0c835" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:48 crc kubenswrapper[4892]: I0122 09:14:48.600333 4892 status_manager.go:851] "Failed to get status for pod" podUID="49114a09-ac3a-4dbd-99f1-26543fbf5dcf" pod="openshift-marketplace/community-operators-vrbh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vrbh2\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:48 crc kubenswrapper[4892]: I0122 09:14:48.600492 4892 status_manager.go:851] "Failed to get status for pod" podUID="91b13dd5-7aad-496a-8138-9a9e638a0a01" pod="openshift-marketplace/redhat-operators-pf4pl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pf4pl\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:48 crc kubenswrapper[4892]: I0122 09:14:48.600701 4892 status_manager.go:851] "Failed to get status for pod" podUID="5524172a-41d9-4206-b133-ff86aa15f588" pod="openshift-marketplace/certified-operators-p8m4r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p8m4r\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:48 crc kubenswrapper[4892]: I0122 09:14:48.601229 4892 status_manager.go:851] "Failed to get status for pod" podUID="0191742c-4602-476c-9a6a-52600797194a" pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-4rzs7\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:48 crc kubenswrapper[4892]: I0122 09:14:48.601446 4892 status_manager.go:851] "Failed to get status for pod" podUID="1c53bdc3-44ab-4be2-9f83-2d241776a337" pod="openshift-marketplace/redhat-marketplace-7pjdw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pjdw\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:49 crc kubenswrapper[4892]: I0122 09:14:49.303947 4892 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 22 09:14:49 crc kubenswrapper[4892]: I0122 09:14:49.304378 4892 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 22 09:14:50 crc kubenswrapper[4892]: E0122 09:14:50.137387 4892 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" interval="6.4s" Jan 22 09:14:50 crc kubenswrapper[4892]: I0122 09:14:50.613050 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 22 09:14:50 crc kubenswrapper[4892]: I0122 09:14:50.613357 4892 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283" exitCode=1 Jan 22 09:14:50 crc kubenswrapper[4892]: I0122 09:14:50.613423 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283"} Jan 22 09:14:50 crc kubenswrapper[4892]: I0122 09:14:50.614182 4892 scope.go:117] "RemoveContainer" containerID="cfdc43a981bcc1513e43690abae207ca87d9884149cfb1f537c07d656e872283" Jan 22 09:14:50 crc kubenswrapper[4892]: I0122 09:14:50.614376 4892 status_manager.go:851] "Failed to get status for pod" podUID="49114a09-ac3a-4dbd-99f1-26543fbf5dcf" pod="openshift-marketplace/community-operators-vrbh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vrbh2\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:50 crc kubenswrapper[4892]: I0122 09:14:50.615029 4892 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:50 crc kubenswrapper[4892]: I0122 09:14:50.615610 4892 status_manager.go:851] "Failed to get status for pod" podUID="91b13dd5-7aad-496a-8138-9a9e638a0a01" pod="openshift-marketplace/redhat-operators-pf4pl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pf4pl\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:50 crc kubenswrapper[4892]: I0122 09:14:50.615852 4892 status_manager.go:851] "Failed to get status for pod" podUID="5524172a-41d9-4206-b133-ff86aa15f588" pod="openshift-marketplace/certified-operators-p8m4r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p8m4r\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:50 crc kubenswrapper[4892]: I0122 09:14:50.616075 4892 status_manager.go:851] "Failed to get status for pod" podUID="1c53bdc3-44ab-4be2-9f83-2d241776a337" pod="openshift-marketplace/redhat-marketplace-7pjdw" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pjdw\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:50 crc kubenswrapper[4892]: I0122 09:14:50.616332 4892 status_manager.go:851] "Failed to get status for pod" podUID="0191742c-4602-476c-9a6a-52600797194a" pod="openshift-image-registry/image-registry-66df7c8f76-4rzs7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-4rzs7\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:50 crc kubenswrapper[4892]: I0122 09:14:50.617384 4892 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:50 crc kubenswrapper[4892]: I0122 09:14:50.617558 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f33af2a1d45067692fa98670dea100137a5c9d4ce53a7ae1cd57c52de8e24366"} Jan 22 09:14:50 crc kubenswrapper[4892]: I0122 09:14:50.617605 4892 status_manager.go:851] "Failed to get status for pod" podUID="44ffdd66-4baa-46c3-9441-4f46b6c0c835" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 22 09:14:51 crc kubenswrapper[4892]: I0122 09:14:51.536061 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" podUID="3a357ae9-5621-4063-b475-508269240d98" containerName="oauth-openshift" containerID="cri-o://fcdcea72dd4d085f9ee81bb55b22d1859fecb8fb11958c3c011738f6720bb671" gracePeriod=15 Jan 22 09:14:51 crc kubenswrapper[4892]: I0122 09:14:51.626331 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 22 09:14:51 crc kubenswrapper[4892]: I0122 09:14:51.626435 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d2d15d7629881a7b1ecaa25990a682f26e3f93ebff35ac1e3f2b0baeb92dea32"} Jan 22 09:14:51 crc kubenswrapper[4892]: I0122 09:14:51.628958 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"61f06e0a25a52fef4092bb08ce94fbc8cdc1e0dfc7964e4dc11d81d812182f78"} Jan 22 09:14:51 crc kubenswrapper[4892]: I0122 09:14:51.628991 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d5712986596d681fca7a8006417df46f7c6467386e29f0624007b88d20e09274"} Jan 22 09:14:51 crc kubenswrapper[4892]: I0122 09:14:51.629005 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1855c1e58a5381e311a013f36e3dd7530630851f692aba259e39169484aebc65"} Jan 22 09:14:51 crc kubenswrapper[4892]: I0122 09:14:51.629016 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fc4aa6d46001434f25795976d39fe1a97c6f19a1075b90defcf2efc5226e7657"} Jan 22 09:14:51 crc kubenswrapper[4892]: I0122 09:14:51.629190 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:14:51 crc kubenswrapper[4892]: I0122 09:14:51.629234 4892 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9ac3cd91-e665-45d1-abbd-2d45b4392193" Jan 22 09:14:51 crc kubenswrapper[4892]: I0122 09:14:51.629258 4892 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9ac3cd91-e665-45d1-abbd-2d45b4392193" Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.080951 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.199037 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3a357ae9-5621-4063-b475-508269240d98-audit-policies\") pod \"3a357ae9-5621-4063-b475-508269240d98\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.199085 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a357ae9-5621-4063-b475-508269240d98-audit-dir\") pod \"3a357ae9-5621-4063-b475-508269240d98\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.199117 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-session\") pod \"3a357ae9-5621-4063-b475-508269240d98\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.199142 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-trusted-ca-bundle\") pod \"3a357ae9-5621-4063-b475-508269240d98\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.199171 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-user-template-error\") pod \"3a357ae9-5621-4063-b475-508269240d98\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.199200 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-user-template-login\") pod \"3a357ae9-5621-4063-b475-508269240d98\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " Jan 22 09:14:52 crc kubenswrapper[4892]: 
I0122 09:14:52.199216 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t67kf\" (UniqueName: \"kubernetes.io/projected/3a357ae9-5621-4063-b475-508269240d98-kube-api-access-t67kf\") pod \"3a357ae9-5621-4063-b475-508269240d98\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.199239 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-serving-cert\") pod \"3a357ae9-5621-4063-b475-508269240d98\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.199257 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-cliconfig\") pod \"3a357ae9-5621-4063-b475-508269240d98\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.199323 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-user-template-provider-selection\") pod \"3a357ae9-5621-4063-b475-508269240d98\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.199358 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-router-certs\") pod \"3a357ae9-5621-4063-b475-508269240d98\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.199375 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-ocp-branding-template\") pod \"3a357ae9-5621-4063-b475-508269240d98\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.199404 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-user-idp-0-file-data\") pod \"3a357ae9-5621-4063-b475-508269240d98\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.199422 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-service-ca\") pod \"3a357ae9-5621-4063-b475-508269240d98\" (UID: \"3a357ae9-5621-4063-b475-508269240d98\") " Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.200365 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "3a357ae9-5621-4063-b475-508269240d98" (UID: "3a357ae9-5621-4063-b475-508269240d98"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.200612 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a357ae9-5621-4063-b475-508269240d98-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "3a357ae9-5621-4063-b475-508269240d98" (UID: "3a357ae9-5621-4063-b475-508269240d98"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.200642 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a357ae9-5621-4063-b475-508269240d98-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a357ae9-5621-4063-b475-508269240d98" (UID: "3a357ae9-5621-4063-b475-508269240d98"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.208243 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "3a357ae9-5621-4063-b475-508269240d98" (UID: "3a357ae9-5621-4063-b475-508269240d98"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.208483 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "3a357ae9-5621-4063-b475-508269240d98" (UID: "3a357ae9-5621-4063-b475-508269240d98"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.220497 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "3a357ae9-5621-4063-b475-508269240d98" (UID: "3a357ae9-5621-4063-b475-508269240d98"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.221130 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a357ae9-5621-4063-b475-508269240d98-kube-api-access-t67kf" (OuterVolumeSpecName: "kube-api-access-t67kf") pod "3a357ae9-5621-4063-b475-508269240d98" (UID: "3a357ae9-5621-4063-b475-508269240d98"). InnerVolumeSpecName "kube-api-access-t67kf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.221242 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "3a357ae9-5621-4063-b475-508269240d98" (UID: "3a357ae9-5621-4063-b475-508269240d98"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.221541 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "3a357ae9-5621-4063-b475-508269240d98" (UID: "3a357ae9-5621-4063-b475-508269240d98"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.221803 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "3a357ae9-5621-4063-b475-508269240d98" (UID: "3a357ae9-5621-4063-b475-508269240d98"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.222040 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "3a357ae9-5621-4063-b475-508269240d98" (UID: "3a357ae9-5621-4063-b475-508269240d98"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.222575 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "3a357ae9-5621-4063-b475-508269240d98" (UID: "3a357ae9-5621-4063-b475-508269240d98"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.222934 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "3a357ae9-5621-4063-b475-508269240d98" (UID: "3a357ae9-5621-4063-b475-508269240d98"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.223074 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "3a357ae9-5621-4063-b475-508269240d98" (UID: "3a357ae9-5621-4063-b475-508269240d98"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.300986 4892 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.301734 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t67kf\" (UniqueName: \"kubernetes.io/projected/3a357ae9-5621-4063-b475-508269240d98-kube-api-access-t67kf\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.301772 4892 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.301784 4892 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.301798 4892 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.301827 4892 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.301837 4892 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.301848 4892 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.301857 4892 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.301869 4892 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3a357ae9-5621-4063-b475-508269240d98-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.301880 4892 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a357ae9-5621-4063-b475-508269240d98-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.301889 4892 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.301899 4892 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.301910 4892 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3a357ae9-5621-4063-b475-508269240d98-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.634324 4892 generic.go:334] "Generic (PLEG): container finished" podID="3a357ae9-5621-4063-b475-508269240d98" containerID="fcdcea72dd4d085f9ee81bb55b22d1859fecb8fb11958c3c011738f6720bb671" exitCode=0 Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.634380 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" event={"ID":"3a357ae9-5621-4063-b475-508269240d98","Type":"ContainerDied","Data":"fcdcea72dd4d085f9ee81bb55b22d1859fecb8fb11958c3c011738f6720bb671"} Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.634676 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" event={"ID":"3a357ae9-5621-4063-b475-508269240d98","Type":"ContainerDied","Data":"ab956946fd34e19c4a008613073246725924b9e5f4bde980e711c1d5a623756b"} Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.634695 4892 scope.go:117] "RemoveContainer" containerID="fcdcea72dd4d085f9ee81bb55b22d1859fecb8fb11958c3c011738f6720bb671" Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.634433 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-j4vqv" Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.652992 4892 scope.go:117] "RemoveContainer" containerID="fcdcea72dd4d085f9ee81bb55b22d1859fecb8fb11958c3c011738f6720bb671" Jan 22 09:14:52 crc kubenswrapper[4892]: E0122 09:14:52.653414 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcdcea72dd4d085f9ee81bb55b22d1859fecb8fb11958c3c011738f6720bb671\": container with ID starting with fcdcea72dd4d085f9ee81bb55b22d1859fecb8fb11958c3c011738f6720bb671 not found: ID does not exist" containerID="fcdcea72dd4d085f9ee81bb55b22d1859fecb8fb11958c3c011738f6720bb671" Jan 22 09:14:52 crc kubenswrapper[4892]: I0122 09:14:52.653445 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcdcea72dd4d085f9ee81bb55b22d1859fecb8fb11958c3c011738f6720bb671"} err="failed to get container status \"fcdcea72dd4d085f9ee81bb55b22d1859fecb8fb11958c3c011738f6720bb671\": rpc error: code = NotFound desc = could not find container \"fcdcea72dd4d085f9ee81bb55b22d1859fecb8fb11958c3c011738f6720bb671\": container with ID starting with fcdcea72dd4d085f9ee81bb55b22d1859fecb8fb11958c3c011738f6720bb671 not found: ID does not exist" Jan 22 09:14:55 crc kubenswrapper[4892]: I0122 09:14:55.434505 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:14:55 crc kubenswrapper[4892]: I0122 09:14:55.435023 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:14:55 crc kubenswrapper[4892]: I0122 09:14:55.439121 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:14:57 crc kubenswrapper[4892]: I0122 09:14:57.020177 4892 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:14:57 crc kubenswrapper[4892]: I0122 09:14:57.092708 4892 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1d7fd50f-586a-4822-82f3-f111b623f55e" Jan 22 09:14:57 crc kubenswrapper[4892]: I0122 09:14:57.488522 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:14:57 crc kubenswrapper[4892]: I0122 09:14:57.488615 4892 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 22 09:14:57 crc kubenswrapper[4892]: I0122 09:14:57.488665 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 22 09:14:57 crc kubenswrapper[4892]: E0122 09:14:57.537962 4892 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\": Failed to watch 
*v1.Secret: unknown (get secrets)" logger="UnhandledError" Jan 22 09:14:57 crc kubenswrapper[4892]: I0122 09:14:57.660111 4892 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9ac3cd91-e665-45d1-abbd-2d45b4392193" Jan 22 09:14:57 crc kubenswrapper[4892]: I0122 09:14:57.660136 4892 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9ac3cd91-e665-45d1-abbd-2d45b4392193" Jan 22 09:14:57 crc kubenswrapper[4892]: I0122 09:14:57.662564 4892 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1d7fd50f-586a-4822-82f3-f111b623f55e" Jan 22 09:14:59 crc kubenswrapper[4892]: I0122 09:14:59.303788 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:15:06 crc kubenswrapper[4892]: I0122 09:15:06.581523 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 22 09:15:07 crc kubenswrapper[4892]: I0122 09:15:07.257591 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 22 09:15:07 crc kubenswrapper[4892]: I0122 09:15:07.359990 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 22 09:15:07 crc kubenswrapper[4892]: I0122 09:15:07.631728 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:15:07 crc kubenswrapper[4892]: I0122 09:15:07.640145 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:15:07 crc kubenswrapper[4892]: I0122 09:15:07.875146 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 22 09:15:08 crc kubenswrapper[4892]: I0122 09:15:08.068654 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 22 09:15:08 crc kubenswrapper[4892]: I0122 09:15:08.157578 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 22 09:15:08 crc kubenswrapper[4892]: I0122 09:15:08.248726 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 22 09:15:08 crc kubenswrapper[4892]: I0122 09:15:08.306570 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 22 09:15:08 crc kubenswrapper[4892]: I0122 09:15:08.354682 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 22 09:15:08 crc kubenswrapper[4892]: I0122 09:15:08.557481 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 22 09:15:08 crc kubenswrapper[4892]: I0122 09:15:08.596847 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 22 09:15:08 crc kubenswrapper[4892]: I0122 09:15:08.655908 4892 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 22 09:15:08 crc kubenswrapper[4892]: I0122 09:15:08.792387 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 22 09:15:09 crc kubenswrapper[4892]: I0122 09:15:09.023010 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 22 09:15:09 crc kubenswrapper[4892]: I0122 09:15:09.393944 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 22 09:15:09 crc kubenswrapper[4892]: I0122 09:15:09.541608 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 22 09:15:09 crc kubenswrapper[4892]: I0122 09:15:09.555110 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 22 09:15:09 crc kubenswrapper[4892]: I0122 09:15:09.559277 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 22 09:15:09 crc kubenswrapper[4892]: I0122 09:15:09.573901 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 22 09:15:09 crc kubenswrapper[4892]: I0122 09:15:09.601416 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 22 09:15:09 crc kubenswrapper[4892]: I0122 09:15:09.613382 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 22 09:15:09 crc kubenswrapper[4892]: I0122 09:15:09.645096 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 22 09:15:09 crc kubenswrapper[4892]: I0122 09:15:09.719162 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 22 09:15:09 crc kubenswrapper[4892]: I0122 09:15:09.730473 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 22 09:15:09 crc kubenswrapper[4892]: I0122 09:15:09.746885 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 22 09:15:10 crc kubenswrapper[4892]: I0122 09:15:10.070374 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 22 09:15:10 crc kubenswrapper[4892]: I0122 09:15:10.077941 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 22 09:15:10 crc kubenswrapper[4892]: I0122 09:15:10.117329 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 22 09:15:10 crc kubenswrapper[4892]: I0122 09:15:10.150925 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 22 09:15:10 crc kubenswrapper[4892]: I0122 09:15:10.155756 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 22 09:15:10 crc 
kubenswrapper[4892]: I0122 09:15:10.260014 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 22 09:15:10 crc kubenswrapper[4892]: I0122 09:15:10.486638 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 22 09:15:10 crc kubenswrapper[4892]: I0122 09:15:10.544843 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 22 09:15:10 crc kubenswrapper[4892]: I0122 09:15:10.617959 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 22 09:15:10 crc kubenswrapper[4892]: I0122 09:15:10.623623 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 22 09:15:10 crc kubenswrapper[4892]: I0122 09:15:10.674059 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 22 09:15:10 crc kubenswrapper[4892]: I0122 09:15:10.754555 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 22 09:15:10 crc kubenswrapper[4892]: I0122 09:15:10.776775 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 22 09:15:10 crc kubenswrapper[4892]: I0122 09:15:10.802763 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 22 09:15:10 crc kubenswrapper[4892]: I0122 09:15:10.812150 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 22 09:15:10 crc kubenswrapper[4892]: I0122 09:15:10.867072 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 22 09:15:10 crc kubenswrapper[4892]: I0122 09:15:10.908574 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 22 09:15:11 crc kubenswrapper[4892]: I0122 09:15:11.070778 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 22 09:15:11 crc kubenswrapper[4892]: I0122 09:15:11.189262 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 22 09:15:11 crc kubenswrapper[4892]: I0122 09:15:11.412930 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 22 09:15:11 crc kubenswrapper[4892]: I0122 09:15:11.484762 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 22 09:15:11 crc kubenswrapper[4892]: I0122 09:15:11.500270 4892 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 22 09:15:11 crc kubenswrapper[4892]: I0122 09:15:11.522840 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 22 09:15:11 crc kubenswrapper[4892]: I0122 09:15:11.606776 4892 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 22 09:15:11 crc kubenswrapper[4892]: I0122 09:15:11.614370 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 22 09:15:11 crc kubenswrapper[4892]: I0122 09:15:11.649371 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 22 09:15:11 crc kubenswrapper[4892]: I0122 09:15:11.716171 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 22 09:15:11 crc kubenswrapper[4892]: I0122 09:15:11.753330 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 22 09:15:11 crc kubenswrapper[4892]: I0122 09:15:11.758747 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 22 09:15:11 crc kubenswrapper[4892]: I0122 09:15:11.795905 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 22 09:15:11 crc kubenswrapper[4892]: I0122 09:15:11.851897 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 22 09:15:11 crc kubenswrapper[4892]: I0122 09:15:11.885104 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 22 09:15:11 crc kubenswrapper[4892]: I0122 09:15:11.924069 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 22 09:15:11 crc kubenswrapper[4892]: I0122 09:15:11.991171 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 22 09:15:11 crc kubenswrapper[4892]: I0122 09:15:11.994680 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 22 09:15:12 crc kubenswrapper[4892]: I0122 09:15:12.029369 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 22 09:15:12 crc kubenswrapper[4892]: I0122 09:15:12.147847 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 22 09:15:12 crc kubenswrapper[4892]: I0122 09:15:12.175256 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 22 09:15:12 crc kubenswrapper[4892]: I0122 09:15:12.229924 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 22 09:15:12 crc kubenswrapper[4892]: I0122 09:15:12.236412 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 22 09:15:12 crc kubenswrapper[4892]: I0122 09:15:12.282252 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 22 09:15:12 crc kubenswrapper[4892]: I0122 09:15:12.314225 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 22 09:15:12 crc kubenswrapper[4892]: I0122 09:15:12.423065 4892 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 22 09:15:12 crc kubenswrapper[4892]: I0122 09:15:12.431434 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 22 09:15:12 crc kubenswrapper[4892]: I0122 09:15:12.503840 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 22 09:15:12 crc kubenswrapper[4892]: I0122 09:15:12.564089 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 22 09:15:12 crc kubenswrapper[4892]: I0122 09:15:12.623531 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 22 09:15:12 crc kubenswrapper[4892]: I0122 09:15:12.635558 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 22 09:15:12 crc kubenswrapper[4892]: I0122 09:15:12.677241 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 22 09:15:12 crc kubenswrapper[4892]: I0122 09:15:12.682773 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 22 09:15:12 crc kubenswrapper[4892]: I0122 09:15:12.729344 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 22 09:15:12 crc kubenswrapper[4892]: I0122 09:15:12.770022 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 22 09:15:12 crc kubenswrapper[4892]: I0122 09:15:12.816127 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 22 09:15:12 crc kubenswrapper[4892]: I0122 09:15:12.838245 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 22 09:15:12 crc kubenswrapper[4892]: I0122 09:15:12.849614 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 22 09:15:12 crc kubenswrapper[4892]: I0122 09:15:12.880366 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 22 09:15:13 crc kubenswrapper[4892]: I0122 09:15:13.007203 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 22 09:15:13 crc kubenswrapper[4892]: I0122 09:15:13.019244 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 22 09:15:13 crc kubenswrapper[4892]: I0122 09:15:13.031819 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 22 09:15:13 crc kubenswrapper[4892]: I0122 09:15:13.044616 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 22 09:15:13 crc kubenswrapper[4892]: I0122 09:15:13.065870 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 22 09:15:13 crc kubenswrapper[4892]: I0122 09:15:13.279109 4892 reflector.go:368] Caches populated for *v1.Pod from 
pkg/kubelet/config/apiserver.go:66 Jan 22 09:15:13 crc kubenswrapper[4892]: I0122 09:15:13.284198 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7pjdw" podStartSLOduration=39.391513239 podStartE2EDuration="42.284172691s" podCreationTimestamp="2026-01-22 09:14:31 +0000 UTC" firstStartedPulling="2026-01-22 09:14:32.439855773 +0000 UTC m=+242.283934856" lastFinishedPulling="2026-01-22 09:14:35.332515245 +0000 UTC m=+245.176594308" observedRunningTime="2026-01-22 09:14:57.067992146 +0000 UTC m=+266.912071219" watchObservedRunningTime="2026-01-22 09:15:13.284172691 +0000 UTC m=+283.128251794" Jan 22 09:15:13 crc kubenswrapper[4892]: I0122 09:15:13.284460 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pf4pl" podStartSLOduration=37.69684568 podStartE2EDuration="40.284451938s" podCreationTimestamp="2026-01-22 09:14:33 +0000 UTC" firstStartedPulling="2026-01-22 09:14:34.454901842 +0000 UTC m=+244.298980915" lastFinishedPulling="2026-01-22 09:14:37.0425081 +0000 UTC m=+246.886587173" observedRunningTime="2026-01-22 09:14:57.042140175 +0000 UTC m=+266.886219238" watchObservedRunningTime="2026-01-22 09:15:13.284451938 +0000 UTC m=+283.128531041" Jan 22 09:15:13 crc kubenswrapper[4892]: I0122 09:15:13.285073 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=39.285063033 podStartE2EDuration="39.285063033s" podCreationTimestamp="2026-01-22 09:14:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:14:57.104391447 +0000 UTC m=+266.948470510" watchObservedRunningTime="2026-01-22 09:15:13.285063033 +0000 UTC m=+283.129142126" Jan 22 09:15:13 crc kubenswrapper[4892]: I0122 09:15:13.286123 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p8m4r" podStartSLOduration=39.325691125 podStartE2EDuration="42.28611075s" podCreationTimestamp="2026-01-22 09:14:31 +0000 UTC" firstStartedPulling="2026-01-22 09:14:32.437039998 +0000 UTC m=+242.281119061" lastFinishedPulling="2026-01-22 09:14:35.397459623 +0000 UTC m=+245.241538686" observedRunningTime="2026-01-22 09:14:57.054497821 +0000 UTC m=+266.898576904" watchObservedRunningTime="2026-01-22 09:15:13.28611075 +0000 UTC m=+283.130189853" Jan 22 09:15:13 crc kubenswrapper[4892]: I0122 09:15:13.286259 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vrbh2" podStartSLOduration=38.858520513 podStartE2EDuration="40.286252224s" podCreationTimestamp="2026-01-22 09:14:33 +0000 UTC" firstStartedPulling="2026-01-22 09:14:35.481519264 +0000 UTC m=+245.325598327" lastFinishedPulling="2026-01-22 09:14:36.909250975 +0000 UTC m=+246.753330038" observedRunningTime="2026-01-22 09:14:57.145745824 +0000 UTC m=+266.989824887" watchObservedRunningTime="2026-01-22 09:15:13.286252224 +0000 UTC m=+283.130331317" Jan 22 09:15:13 crc kubenswrapper[4892]: I0122 09:15:13.287159 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-j4vqv","openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 09:15:13 crc kubenswrapper[4892]: I0122 09:15:13.287231 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 
09:15:13 crc kubenswrapper[4892]: I0122 09:15:13.296249 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:15:13 crc kubenswrapper[4892]: I0122 09:15:13.296864 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:15:13 crc kubenswrapper[4892]: I0122 09:15:13.316419 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=16.316389785 podStartE2EDuration="16.316389785s" podCreationTimestamp="2026-01-22 09:14:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:15:13.31036558 +0000 UTC m=+283.154444643" watchObservedRunningTime="2026-01-22 09:15:13.316389785 +0000 UTC m=+283.160468848" Jan 22 09:15:13 crc kubenswrapper[4892]: I0122 09:15:13.324842 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 22 09:15:13 crc kubenswrapper[4892]: I0122 09:15:13.335626 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 22 09:15:13 crc kubenswrapper[4892]: I0122 09:15:13.362847 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 22 09:15:13 crc kubenswrapper[4892]: I0122 09:15:13.397912 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 22 09:15:13 crc kubenswrapper[4892]: I0122 09:15:13.416000 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 22 09:15:13 crc kubenswrapper[4892]: I0122 09:15:13.420962 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 22 09:15:13 crc kubenswrapper[4892]: I0122 09:15:13.425362 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a357ae9-5621-4063-b475-508269240d98" path="/var/lib/kubelet/pods/3a357ae9-5621-4063-b475-508269240d98/volumes" Jan 22 09:15:13 crc kubenswrapper[4892]: I0122 09:15:13.511755 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 22 09:15:13 crc kubenswrapper[4892]: I0122 09:15:13.518348 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 22 09:15:13 crc kubenswrapper[4892]: I0122 09:15:13.585909 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 22 09:15:13 crc kubenswrapper[4892]: I0122 09:15:13.595215 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 22 09:15:13 crc kubenswrapper[4892]: I0122 09:15:13.623517 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 22 09:15:13 crc kubenswrapper[4892]: I0122 09:15:13.689875 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 22 09:15:13 crc kubenswrapper[4892]: I0122 09:15:13.763956 4892 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-metrics-certs-default" Jan 22 09:15:13 crc kubenswrapper[4892]: I0122 09:15:13.820732 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 22 09:15:13 crc kubenswrapper[4892]: I0122 09:15:13.991636 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 22 09:15:14 crc kubenswrapper[4892]: I0122 09:15:14.066062 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 22 09:15:14 crc kubenswrapper[4892]: I0122 09:15:14.165235 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 22 09:15:14 crc kubenswrapper[4892]: I0122 09:15:14.179076 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 22 09:15:14 crc kubenswrapper[4892]: I0122 09:15:14.201042 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 22 09:15:14 crc kubenswrapper[4892]: I0122 09:15:14.232232 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 22 09:15:14 crc kubenswrapper[4892]: I0122 09:15:14.263842 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 22 09:15:14 crc kubenswrapper[4892]: I0122 09:15:14.303993 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 22 09:15:14 crc kubenswrapper[4892]: I0122 09:15:14.324556 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 22 09:15:14 crc kubenswrapper[4892]: I0122 09:15:14.335368 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 22 09:15:14 crc kubenswrapper[4892]: I0122 09:15:14.387511 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 22 09:15:14 crc kubenswrapper[4892]: I0122 09:15:14.481679 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 22 09:15:14 crc kubenswrapper[4892]: I0122 09:15:14.488089 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 22 09:15:14 crc kubenswrapper[4892]: I0122 09:15:14.572688 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 22 09:15:14 crc kubenswrapper[4892]: I0122 09:15:14.601354 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 22 09:15:14 crc kubenswrapper[4892]: I0122 09:15:14.714838 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 22 09:15:14 crc kubenswrapper[4892]: I0122 09:15:14.750109 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 22 09:15:14 crc kubenswrapper[4892]: I0122 09:15:14.868619 4892 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 22 09:15:14 crc kubenswrapper[4892]: I0122 09:15:14.915247 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 22 09:15:15 crc kubenswrapper[4892]: I0122 09:15:15.020725 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 22 09:15:15 crc kubenswrapper[4892]: I0122 09:15:15.153071 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 22 09:15:15 crc kubenswrapper[4892]: I0122 09:15:15.178415 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 22 09:15:15 crc kubenswrapper[4892]: I0122 09:15:15.289232 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 22 09:15:15 crc kubenswrapper[4892]: I0122 09:15:15.449937 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 22 09:15:15 crc kubenswrapper[4892]: I0122 09:15:15.456431 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 22 09:15:15 crc kubenswrapper[4892]: I0122 09:15:15.458625 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 22 09:15:15 crc kubenswrapper[4892]: I0122 09:15:15.552528 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 22 09:15:15 crc kubenswrapper[4892]: I0122 09:15:15.621836 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 22 09:15:15 crc kubenswrapper[4892]: I0122 09:15:15.633140 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 22 09:15:15 crc kubenswrapper[4892]: I0122 09:15:15.749696 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 22 09:15:15 crc kubenswrapper[4892]: I0122 09:15:15.792937 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 22 09:15:15 crc kubenswrapper[4892]: I0122 09:15:15.819113 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 22 09:15:15 crc kubenswrapper[4892]: I0122 09:15:15.946267 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 22 09:15:15 crc kubenswrapper[4892]: I0122 09:15:15.979640 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 22 09:15:16 crc kubenswrapper[4892]: I0122 09:15:16.034794 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 22 09:15:16 crc kubenswrapper[4892]: I0122 09:15:16.266387 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 22 09:15:16 crc kubenswrapper[4892]: I0122 09:15:16.327599 4892 reflector.go:368] Caches populated for *v1.Secret from 
object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 22 09:15:16 crc kubenswrapper[4892]: I0122 09:15:16.381169 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 22 09:15:16 crc kubenswrapper[4892]: I0122 09:15:16.406971 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 22 09:15:16 crc kubenswrapper[4892]: I0122 09:15:16.419790 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 22 09:15:16 crc kubenswrapper[4892]: I0122 09:15:16.451977 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 22 09:15:16 crc kubenswrapper[4892]: I0122 09:15:16.455801 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 22 09:15:16 crc kubenswrapper[4892]: I0122 09:15:16.478402 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 22 09:15:16 crc kubenswrapper[4892]: I0122 09:15:16.499348 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 22 09:15:16 crc kubenswrapper[4892]: I0122 09:15:16.525898 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 22 09:15:16 crc kubenswrapper[4892]: I0122 09:15:16.653658 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 22 09:15:16 crc kubenswrapper[4892]: I0122 09:15:16.662641 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 22 09:15:16 crc kubenswrapper[4892]: I0122 09:15:16.736202 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 22 09:15:16 crc kubenswrapper[4892]: I0122 09:15:16.778481 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 22 09:15:16 crc kubenswrapper[4892]: I0122 09:15:16.784754 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 22 09:15:16 crc kubenswrapper[4892]: I0122 09:15:16.890633 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 22 09:15:17 crc kubenswrapper[4892]: I0122 09:15:17.093680 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 22 09:15:17 crc kubenswrapper[4892]: I0122 09:15:17.180071 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 22 09:15:17 crc kubenswrapper[4892]: I0122 09:15:17.247570 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 22 09:15:17 crc kubenswrapper[4892]: I0122 09:15:17.284560 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 22 09:15:17 crc kubenswrapper[4892]: I0122 09:15:17.313822 4892 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 22 09:15:17 crc kubenswrapper[4892]: I0122 09:15:17.347034 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 22 09:15:17 crc kubenswrapper[4892]: I0122 09:15:17.417374 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 22 09:15:17 crc kubenswrapper[4892]: I0122 09:15:17.432257 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 22 09:15:17 crc kubenswrapper[4892]: I0122 09:15:17.456894 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 22 09:15:17 crc kubenswrapper[4892]: I0122 09:15:17.470713 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 22 09:15:17 crc kubenswrapper[4892]: I0122 09:15:17.517473 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 22 09:15:17 crc kubenswrapper[4892]: I0122 09:15:17.593942 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 22 09:15:17 crc kubenswrapper[4892]: I0122 09:15:17.616084 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 22 09:15:17 crc kubenswrapper[4892]: I0122 09:15:17.630713 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 22 09:15:17 crc kubenswrapper[4892]: I0122 09:15:17.679830 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 22 09:15:17 crc kubenswrapper[4892]: I0122 09:15:17.891638 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 22 09:15:17 crc kubenswrapper[4892]: I0122 09:15:17.934341 4892 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 22 09:15:17 crc kubenswrapper[4892]: I0122 09:15:17.958518 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 22 09:15:17 crc kubenswrapper[4892]: I0122 09:15:17.976867 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 22 09:15:17 crc kubenswrapper[4892]: I0122 09:15:17.981391 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 22 09:15:17 crc kubenswrapper[4892]: I0122 09:15:17.994336 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 22 09:15:18 crc kubenswrapper[4892]: I0122 09:15:18.023066 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 22 09:15:18 crc kubenswrapper[4892]: I0122 09:15:18.151625 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 22 09:15:18 crc kubenswrapper[4892]: I0122 09:15:18.216174 4892 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 22 09:15:18 crc kubenswrapper[4892]: I0122 09:15:18.224101 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 22 09:15:18 crc kubenswrapper[4892]: I0122 09:15:18.270018 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 22 09:15:18 crc kubenswrapper[4892]: I0122 09:15:18.279074 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 22 09:15:18 crc kubenswrapper[4892]: I0122 09:15:18.280409 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 22 09:15:18 crc kubenswrapper[4892]: I0122 09:15:18.381018 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 22 09:15:18 crc kubenswrapper[4892]: I0122 09:15:18.464237 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 22 09:15:18 crc kubenswrapper[4892]: I0122 09:15:18.484664 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 22 09:15:18 crc kubenswrapper[4892]: I0122 09:15:18.492218 4892 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 09:15:18 crc kubenswrapper[4892]: I0122 09:15:18.492461 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://b0d5d3f120eafae729adfbba9cd4dbce65de0872160cf71bbfab2d3c0b417e3d" gracePeriod=5 Jan 22 09:15:18 crc kubenswrapper[4892]: I0122 09:15:18.587966 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 22 09:15:18 crc kubenswrapper[4892]: I0122 09:15:18.638544 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 22 09:15:18 crc kubenswrapper[4892]: I0122 09:15:18.682546 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 22 09:15:18 crc kubenswrapper[4892]: I0122 09:15:18.698128 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 22 09:15:18 crc kubenswrapper[4892]: I0122 09:15:18.749233 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 22 09:15:18 crc kubenswrapper[4892]: I0122 09:15:18.816381 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 22 09:15:18 crc kubenswrapper[4892]: I0122 09:15:18.873126 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 22 09:15:19 crc kubenswrapper[4892]: I0122 09:15:19.151698 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 22 09:15:19 crc 
kubenswrapper[4892]: I0122 09:15:19.209580 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 22 09:15:19 crc kubenswrapper[4892]: I0122 09:15:19.313390 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 22 09:15:19 crc kubenswrapper[4892]: I0122 09:15:19.317898 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 22 09:15:19 crc kubenswrapper[4892]: I0122 09:15:19.391300 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 22 09:15:19 crc kubenswrapper[4892]: I0122 09:15:19.437920 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 22 09:15:19 crc kubenswrapper[4892]: I0122 09:15:19.670619 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 22 09:15:19 crc kubenswrapper[4892]: I0122 09:15:19.799180 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 22 09:15:19 crc kubenswrapper[4892]: I0122 09:15:19.819956 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 22 09:15:19 crc kubenswrapper[4892]: I0122 09:15:19.869128 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 22 09:15:19 crc kubenswrapper[4892]: I0122 09:15:19.871081 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 22 09:15:19 crc kubenswrapper[4892]: I0122 09:15:19.881995 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 22 09:15:19 crc kubenswrapper[4892]: I0122 09:15:19.963402 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 22 09:15:19 crc kubenswrapper[4892]: I0122 09:15:19.979996 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 22 09:15:19 crc kubenswrapper[4892]: I0122 09:15:19.983789 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 22 09:15:20 crc kubenswrapper[4892]: I0122 09:15:20.109225 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 22 09:15:20 crc kubenswrapper[4892]: I0122 09:15:20.113118 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 22 09:15:20 crc kubenswrapper[4892]: I0122 09:15:20.174568 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 22 09:15:20 crc kubenswrapper[4892]: I0122 09:15:20.259044 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 22 09:15:20 crc kubenswrapper[4892]: I0122 09:15:20.270958 4892 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 22 09:15:20 crc kubenswrapper[4892]: I0122 09:15:20.284123 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 22 09:15:20 crc kubenswrapper[4892]: I0122 09:15:20.400613 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 22 09:15:20 crc kubenswrapper[4892]: I0122 09:15:20.422319 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 22 09:15:20 crc kubenswrapper[4892]: I0122 09:15:20.456276 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 22 09:15:20 crc kubenswrapper[4892]: I0122 09:15:20.489932 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 22 09:15:20 crc kubenswrapper[4892]: I0122 09:15:20.526612 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 22 09:15:20 crc kubenswrapper[4892]: I0122 09:15:20.721117 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 22 09:15:20 crc kubenswrapper[4892]: I0122 09:15:20.889371 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 22 09:15:20 crc kubenswrapper[4892]: I0122 09:15:20.929170 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 22 09:15:20 crc kubenswrapper[4892]: I0122 09:15:20.994313 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 22 09:15:21 crc kubenswrapper[4892]: I0122 09:15:21.011271 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 22 09:15:21 crc kubenswrapper[4892]: I0122 09:15:21.234060 4892 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 22 09:15:21 crc kubenswrapper[4892]: I0122 09:15:21.247188 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 22 09:15:21 crc kubenswrapper[4892]: I0122 09:15:21.312146 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 22 09:15:21 crc kubenswrapper[4892]: I0122 09:15:21.320476 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 22 09:15:21 crc kubenswrapper[4892]: I0122 09:15:21.552940 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 22 09:15:21 crc kubenswrapper[4892]: I0122 09:15:21.579717 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 22 09:15:21 crc kubenswrapper[4892]: I0122 09:15:21.751991 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 22 09:15:21 crc kubenswrapper[4892]: I0122 09:15:21.915145 4892 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca"/"signing-cabundle" Jan 22 09:15:22 crc kubenswrapper[4892]: I0122 09:15:22.082500 4892 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 22 09:15:22 crc kubenswrapper[4892]: I0122 09:15:22.121087 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 22 09:15:22 crc kubenswrapper[4892]: I0122 09:15:22.293700 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 22 09:15:23 crc kubenswrapper[4892]: I0122 09:15:23.133013 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 22 09:15:23 crc kubenswrapper[4892]: I0122 09:15:23.784697 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 22 09:15:23 crc kubenswrapper[4892]: I0122 09:15:23.784994 4892 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="b0d5d3f120eafae729adfbba9cd4dbce65de0872160cf71bbfab2d3c0b417e3d" exitCode=137 Jan 22 09:15:24 crc kubenswrapper[4892]: I0122 09:15:24.046974 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 22 09:15:24 crc kubenswrapper[4892]: I0122 09:15:24.075214 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 22 09:15:24 crc kubenswrapper[4892]: I0122 09:15:24.075306 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:15:24 crc kubenswrapper[4892]: I0122 09:15:24.152636 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 09:15:24 crc kubenswrapper[4892]: I0122 09:15:24.152708 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 09:15:24 crc kubenswrapper[4892]: I0122 09:15:24.152751 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 09:15:24 crc kubenswrapper[4892]: I0122 09:15:24.152777 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 09:15:24 crc kubenswrapper[4892]: I0122 09:15:24.152794 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 
09:15:24 crc kubenswrapper[4892]: I0122 09:15:24.152855 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:15:24 crc kubenswrapper[4892]: I0122 09:15:24.152942 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:15:24 crc kubenswrapper[4892]: I0122 09:15:24.152991 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:15:24 crc kubenswrapper[4892]: I0122 09:15:24.153036 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:15:24 crc kubenswrapper[4892]: I0122 09:15:24.153149 4892 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:24 crc kubenswrapper[4892]: I0122 09:15:24.153166 4892 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:24 crc kubenswrapper[4892]: I0122 09:15:24.153176 4892 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:24 crc kubenswrapper[4892]: I0122 09:15:24.153185 4892 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:24 crc kubenswrapper[4892]: I0122 09:15:24.163012 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
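To read the teardown above: the startup-monitor static pod is removed from the kubelet's file source ("SyncLoop REMOVE" source="file"), its container is killed with gracePeriod=5, and the PLEG later reports exitCode=137. 137 is the usual 128+9 encoding, meaning the process did not exit on SIGTERM within the grace period and was SIGKILLed; only then does the reconciler unmount and detach the pod's host-path volumes. A small sketch of that exit-code convention (the helper is illustrative, not kubelet code):

```go
// Sketch: decode a container exit code such as the 137 reported above.
// By Unix convention, codes above 128 mean "terminated by signal (code-128)".
package main

import (
	"fmt"
	"syscall"
)

func describeExit(code int) string {
	if code > 128 {
		sig := syscall.Signal(code - 128)
		return fmt.Sprintf("killed by signal %d (%v)", code-128, sig)
	}
	return fmt.Sprintf("exited with status %d", code)
}

func main() {
	fmt.Println(describeExit(137)) // killed by signal 9 (killed)
	fmt.Println(describeExit(0))   // exited with status 0
}
```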
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:15:24 crc kubenswrapper[4892]: I0122 09:15:24.253841 4892 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:24 crc kubenswrapper[4892]: I0122 09:15:24.318186 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 22 09:15:24 crc kubenswrapper[4892]: I0122 09:15:24.789648 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 22 09:15:24 crc kubenswrapper[4892]: I0122 09:15:24.789700 4892 scope.go:117] "RemoveContainer" containerID="b0d5d3f120eafae729adfbba9cd4dbce65de0872160cf71bbfab2d3c0b417e3d" Jan 22 09:15:24 crc kubenswrapper[4892]: I0122 09:15:24.789793 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:15:25 crc kubenswrapper[4892]: I0122 09:15:25.426051 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 22 09:15:25 crc kubenswrapper[4892]: I0122 09:15:25.426777 4892 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 22 09:15:25 crc kubenswrapper[4892]: I0122 09:15:25.437843 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 09:15:25 crc kubenswrapper[4892]: I0122 09:15:25.437915 4892 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="7f0cb62c-8f17-49a0-aec2-ffad530131c3" Jan 22 09:15:25 crc kubenswrapper[4892]: I0122 09:15:25.442260 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 09:15:25 crc kubenswrapper[4892]: I0122 09:15:25.442321 4892 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="7f0cb62c-8f17-49a0-aec2-ffad530131c3" Jan 22 09:15:26 crc kubenswrapper[4892]: I0122 09:15:26.250072 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vbm7b"] Jan 22 09:15:30 crc kubenswrapper[4892]: I0122 09:15:30.525051 4892 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.552751 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7874f76df5-zzj6b"] Jan 22 09:15:47 crc kubenswrapper[4892]: E0122 09:15:47.553649 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.553661 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 22 09:15:47 crc kubenswrapper[4892]: E0122 09:15:47.553668 4892 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3a357ae9-5621-4063-b475-508269240d98" containerName="oauth-openshift" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.553675 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a357ae9-5621-4063-b475-508269240d98" containerName="oauth-openshift" Jan 22 09:15:47 crc kubenswrapper[4892]: E0122 09:15:47.553683 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44ffdd66-4baa-46c3-9441-4f46b6c0c835" containerName="installer" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.553689 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="44ffdd66-4baa-46c3-9441-4f46b6c0c835" containerName="installer" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.553787 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="44ffdd66-4baa-46c3-9441-4f46b6c0c835" containerName="installer" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.553800 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.553811 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a357ae9-5621-4063-b475-508269240d98" containerName="oauth-openshift" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.554254 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.555800 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484555-4s6c7"] Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.556414 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.556563 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.556604 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484555-4s6c7" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.556974 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.557459 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.559388 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.559762 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.559919 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.560092 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.561713 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.561754 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.561880 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.562732 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.563015 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.565918 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.566268 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.570724 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7874f76df5-zzj6b"] Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.571452 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.578943 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.586927 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484555-4s6c7"] Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.668782 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" 
(UniqueName: \"kubernetes.io/secret/44b31972-0a55-4eea-824d-7efb9c356218-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.668858 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7608641-d2e1-4f1f-9fce-cbc081c61ce9-config-volume\") pod \"collect-profiles-29484555-4s6c7\" (UID: \"f7608641-d2e1-4f1f-9fce-cbc081c61ce9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484555-4s6c7" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.668891 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/44b31972-0a55-4eea-824d-7efb9c356218-v4-0-config-user-template-login\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.668917 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbgrb\" (UniqueName: \"kubernetes.io/projected/f7608641-d2e1-4f1f-9fce-cbc081c61ce9-kube-api-access-fbgrb\") pod \"collect-profiles-29484555-4s6c7\" (UID: \"f7608641-d2e1-4f1f-9fce-cbc081c61ce9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484555-4s6c7" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.668941 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/44b31972-0a55-4eea-824d-7efb9c356218-v4-0-config-system-router-certs\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.668972 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/44b31972-0a55-4eea-824d-7efb9c356218-v4-0-config-system-session\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.669006 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7608641-d2e1-4f1f-9fce-cbc081c61ce9-secret-volume\") pod \"collect-profiles-29484555-4s6c7\" (UID: \"f7608641-d2e1-4f1f-9fce-cbc081c61ce9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484555-4s6c7" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.669029 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/44b31972-0a55-4eea-824d-7efb9c356218-v4-0-config-system-service-ca\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.669053 4892 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/44b31972-0a55-4eea-824d-7efb9c356218-audit-policies\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.669157 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/44b31972-0a55-4eea-824d-7efb9c356218-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.669202 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/44b31972-0a55-4eea-824d-7efb9c356218-audit-dir\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.669227 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44b31972-0a55-4eea-824d-7efb9c356218-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.669252 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/44b31972-0a55-4eea-824d-7efb9c356218-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.669300 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/44b31972-0a55-4eea-824d-7efb9c356218-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.669327 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/44b31972-0a55-4eea-824d-7efb9c356218-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.669359 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxx5h\" (UniqueName: \"kubernetes.io/projected/44b31972-0a55-4eea-824d-7efb9c356218-kube-api-access-kxx5h\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 
22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.669446 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/44b31972-0a55-4eea-824d-7efb9c356218-v4-0-config-user-template-error\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.773346 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/44b31972-0a55-4eea-824d-7efb9c356218-v4-0-config-system-service-ca\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.773391 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/44b31972-0a55-4eea-824d-7efb9c356218-audit-policies\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.773413 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/44b31972-0a55-4eea-824d-7efb9c356218-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.773432 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/44b31972-0a55-4eea-824d-7efb9c356218-audit-dir\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.773448 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44b31972-0a55-4eea-824d-7efb9c356218-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.773464 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/44b31972-0a55-4eea-824d-7efb9c356218-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.773493 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/44b31972-0a55-4eea-824d-7efb9c356218-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc 
kubenswrapper[4892]: I0122 09:15:47.773514 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/44b31972-0a55-4eea-824d-7efb9c356218-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.773534 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxx5h\" (UniqueName: \"kubernetes.io/projected/44b31972-0a55-4eea-824d-7efb9c356218-kube-api-access-kxx5h\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.773562 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/44b31972-0a55-4eea-824d-7efb9c356218-v4-0-config-user-template-error\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.773587 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/44b31972-0a55-4eea-824d-7efb9c356218-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.773618 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7608641-d2e1-4f1f-9fce-cbc081c61ce9-config-volume\") pod \"collect-profiles-29484555-4s6c7\" (UID: \"f7608641-d2e1-4f1f-9fce-cbc081c61ce9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484555-4s6c7" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.773639 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/44b31972-0a55-4eea-824d-7efb9c356218-v4-0-config-user-template-login\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.773657 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbgrb\" (UniqueName: \"kubernetes.io/projected/f7608641-d2e1-4f1f-9fce-cbc081c61ce9-kube-api-access-fbgrb\") pod \"collect-profiles-29484555-4s6c7\" (UID: \"f7608641-d2e1-4f1f-9fce-cbc081c61ce9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484555-4s6c7" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.773672 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/44b31972-0a55-4eea-824d-7efb9c356218-v4-0-config-system-router-certs\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.773696 
4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/44b31972-0a55-4eea-824d-7efb9c356218-v4-0-config-system-session\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.773731 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7608641-d2e1-4f1f-9fce-cbc081c61ce9-secret-volume\") pod \"collect-profiles-29484555-4s6c7\" (UID: \"f7608641-d2e1-4f1f-9fce-cbc081c61ce9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484555-4s6c7" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.774481 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/44b31972-0a55-4eea-824d-7efb9c356218-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.774594 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44b31972-0a55-4eea-824d-7efb9c356218-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.774704 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/44b31972-0a55-4eea-824d-7efb9c356218-audit-dir\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.775811 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/44b31972-0a55-4eea-824d-7efb9c356218-v4-0-config-system-service-ca\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.776217 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7608641-d2e1-4f1f-9fce-cbc081c61ce9-config-volume\") pod \"collect-profiles-29484555-4s6c7\" (UID: \"f7608641-d2e1-4f1f-9fce-cbc081c61ce9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484555-4s6c7" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.777183 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/44b31972-0a55-4eea-824d-7efb9c356218-audit-policies\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.785690 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/44b31972-0a55-4eea-824d-7efb9c356218-v4-0-config-user-template-login\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.786064 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/44b31972-0a55-4eea-824d-7efb9c356218-v4-0-config-system-session\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.786670 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/44b31972-0a55-4eea-824d-7efb9c356218-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.787006 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/44b31972-0a55-4eea-824d-7efb9c356218-v4-0-config-user-template-error\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.786952 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/44b31972-0a55-4eea-824d-7efb9c356218-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.786988 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/44b31972-0a55-4eea-824d-7efb9c356218-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.786797 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/44b31972-0a55-4eea-824d-7efb9c356218-v4-0-config-system-router-certs\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.787181 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/44b31972-0a55-4eea-824d-7efb9c356218-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.787649 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/f7608641-d2e1-4f1f-9fce-cbc081c61ce9-secret-volume\") pod \"collect-profiles-29484555-4s6c7\" (UID: \"f7608641-d2e1-4f1f-9fce-cbc081c61ce9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484555-4s6c7" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.797585 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbgrb\" (UniqueName: \"kubernetes.io/projected/f7608641-d2e1-4f1f-9fce-cbc081c61ce9-kube-api-access-fbgrb\") pod \"collect-profiles-29484555-4s6c7\" (UID: \"f7608641-d2e1-4f1f-9fce-cbc081c61ce9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484555-4s6c7" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.801385 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxx5h\" (UniqueName: \"kubernetes.io/projected/44b31972-0a55-4eea-824d-7efb9c356218-kube-api-access-kxx5h\") pod \"oauth-openshift-7874f76df5-zzj6b\" (UID: \"44b31972-0a55-4eea-824d-7efb9c356218\") " pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.875459 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:47 crc kubenswrapper[4892]: I0122 09:15:47.895388 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484555-4s6c7" Jan 22 09:15:48 crc kubenswrapper[4892]: I0122 09:15:48.103775 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484555-4s6c7"] Jan 22 09:15:48 crc kubenswrapper[4892]: I0122 09:15:48.260946 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7874f76df5-zzj6b"] Jan 22 09:15:48 crc kubenswrapper[4892]: I0122 09:15:48.836310 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wtrx4"] Jan 22 09:15:48 crc kubenswrapper[4892]: I0122 09:15:48.836928 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-wtrx4" podUID="97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97" containerName="controller-manager" containerID="cri-o://f28d27a6cb25281721d08780a3415f6f264057f46950100c360d86c7f596c3d1" gracePeriod=30 Jan 22 09:15:48 crc kubenswrapper[4892]: I0122 09:15:48.932599 4892 generic.go:334] "Generic (PLEG): container finished" podID="f7608641-d2e1-4f1f-9fce-cbc081c61ce9" containerID="7f85f23900519e580bb95911a03bb89a8179e3d58a544b7fb16a16b6dad241dc" exitCode=0 Jan 22 09:15:48 crc kubenswrapper[4892]: I0122 09:15:48.932692 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484555-4s6c7" event={"ID":"f7608641-d2e1-4f1f-9fce-cbc081c61ce9","Type":"ContainerDied","Data":"7f85f23900519e580bb95911a03bb89a8179e3d58a544b7fb16a16b6dad241dc"} Jan 22 09:15:48 crc kubenswrapper[4892]: I0122 09:15:48.932732 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484555-4s6c7" event={"ID":"f7608641-d2e1-4f1f-9fce-cbc081c61ce9","Type":"ContainerStarted","Data":"0298d8aa9cb203d7a0e7c47a37ffa5b358af714facbbd027a856174967172343"} Jan 22 09:15:48 crc kubenswrapper[4892]: I0122 09:15:48.933556 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vgpdt"] Jan 22 09:15:48 crc kubenswrapper[4892]: I0122 09:15:48.933712 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vgpdt" podUID="a91f44ce-a5d5-4379-a443-c61626f142f7" containerName="route-controller-manager" containerID="cri-o://9e039cf63a021887089e8d3b4bc3b58b069110c637c228054419a06bcd52d9eb" gracePeriod=30 Jan 22 09:15:48 crc kubenswrapper[4892]: I0122 09:15:48.934058 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" event={"ID":"44b31972-0a55-4eea-824d-7efb9c356218","Type":"ContainerStarted","Data":"90a8b61c3b384cc3d9e04b4b7876299763218abe0c0496806999a86aa303cfcc"} Jan 22 09:15:48 crc kubenswrapper[4892]: I0122 09:15:48.934083 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" event={"ID":"44b31972-0a55-4eea-824d-7efb9c356218","Type":"ContainerStarted","Data":"f413f06d2c854901e0258e728643b1dfa0ceb5efa79a58a66bc1824c4afbda15"} Jan 22 09:15:48 crc kubenswrapper[4892]: I0122 09:15:48.934267 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:48 crc kubenswrapper[4892]: I0122 09:15:48.950676 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" Jan 22 09:15:48 crc kubenswrapper[4892]: I0122 09:15:48.973129 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7874f76df5-zzj6b" podStartSLOduration=82.973107299 podStartE2EDuration="1m22.973107299s" podCreationTimestamp="2026-01-22 09:14:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:15:48.971771475 +0000 UTC m=+318.815850548" watchObservedRunningTime="2026-01-22 09:15:48.973107299 +0000 UTC m=+318.817186362" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.201473 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wtrx4" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.238615 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vgpdt" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.299595 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97-serving-cert\") pod \"97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97\" (UID: \"97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97\") " Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.299637 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9rwv\" (UniqueName: \"kubernetes.io/projected/97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97-kube-api-access-k9rwv\") pod \"97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97\" (UID: \"97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97\") " Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.299730 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97-config\") pod \"97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97\" (UID: \"97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97\") " Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.299747 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97-client-ca\") pod \"97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97\" (UID: \"97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97\") " Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.299771 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97-proxy-ca-bundles\") pod \"97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97\" (UID: \"97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97\") " Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.300748 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97-config" (OuterVolumeSpecName: "config") pod "97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97" (UID: "97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.300760 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97-client-ca" (OuterVolumeSpecName: "client-ca") pod "97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97" (UID: "97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.300949 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97" (UID: "97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.304734 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97" (UID: "97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.306003 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97-kube-api-access-k9rwv" (OuterVolumeSpecName: "kube-api-access-k9rwv") pod "97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97" (UID: "97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97"). InnerVolumeSpecName "kube-api-access-k9rwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.400865 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a91f44ce-a5d5-4379-a443-c61626f142f7-config\") pod \"a91f44ce-a5d5-4379-a443-c61626f142f7\" (UID: \"a91f44ce-a5d5-4379-a443-c61626f142f7\") " Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.400968 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsd6n\" (UniqueName: \"kubernetes.io/projected/a91f44ce-a5d5-4379-a443-c61626f142f7-kube-api-access-rsd6n\") pod \"a91f44ce-a5d5-4379-a443-c61626f142f7\" (UID: \"a91f44ce-a5d5-4379-a443-c61626f142f7\") " Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.400999 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a91f44ce-a5d5-4379-a443-c61626f142f7-serving-cert\") pod \"a91f44ce-a5d5-4379-a443-c61626f142f7\" (UID: \"a91f44ce-a5d5-4379-a443-c61626f142f7\") " Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.401032 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a91f44ce-a5d5-4379-a443-c61626f142f7-client-ca\") pod \"a91f44ce-a5d5-4379-a443-c61626f142f7\" (UID: \"a91f44ce-a5d5-4379-a443-c61626f142f7\") " Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.401882 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a91f44ce-a5d5-4379-a443-c61626f142f7-client-ca" (OuterVolumeSpecName: "client-ca") pod "a91f44ce-a5d5-4379-a443-c61626f142f7" (UID: "a91f44ce-a5d5-4379-a443-c61626f142f7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.402001 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a91f44ce-a5d5-4379-a443-c61626f142f7-config" (OuterVolumeSpecName: "config") pod "a91f44ce-a5d5-4379-a443-c61626f142f7" (UID: "a91f44ce-a5d5-4379-a443-c61626f142f7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.402177 4892 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.402194 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a91f44ce-a5d5-4379-a443-c61626f142f7-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.402203 4892 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.402211 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9rwv\" (UniqueName: \"kubernetes.io/projected/97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97-kube-api-access-k9rwv\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.402220 4892 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a91f44ce-a5d5-4379-a443-c61626f142f7-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.402227 4892 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.402235 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.404170 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a91f44ce-a5d5-4379-a443-c61626f142f7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a91f44ce-a5d5-4379-a443-c61626f142f7" (UID: "a91f44ce-a5d5-4379-a443-c61626f142f7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.405085 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a91f44ce-a5d5-4379-a443-c61626f142f7-kube-api-access-rsd6n" (OuterVolumeSpecName: "kube-api-access-rsd6n") pod "a91f44ce-a5d5-4379-a443-c61626f142f7" (UID: "a91f44ce-a5d5-4379-a443-c61626f142f7"). InnerVolumeSpecName "kube-api-access-rsd6n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.445378 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.503410 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsd6n\" (UniqueName: \"kubernetes.io/projected/a91f44ce-a5d5-4379-a443-c61626f142f7-kube-api-access-rsd6n\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.503438 4892 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a91f44ce-a5d5-4379-a443-c61626f142f7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.944013 4892 generic.go:334] "Generic (PLEG): container finished" podID="a91f44ce-a5d5-4379-a443-c61626f142f7" containerID="9e039cf63a021887089e8d3b4bc3b58b069110c637c228054419a06bcd52d9eb" exitCode=0 Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.944083 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vgpdt" event={"ID":"a91f44ce-a5d5-4379-a443-c61626f142f7","Type":"ContainerDied","Data":"9e039cf63a021887089e8d3b4bc3b58b069110c637c228054419a06bcd52d9eb"} Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.944111 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vgpdt" event={"ID":"a91f44ce-a5d5-4379-a443-c61626f142f7","Type":"ContainerDied","Data":"1ab0fb8ab5537520477c976647d4fe6f808f1ba50a7d9229f5caa47f718e908e"} Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.944127 4892 scope.go:117] "RemoveContainer" containerID="9e039cf63a021887089e8d3b4bc3b58b069110c637c228054419a06bcd52d9eb" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.944165 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vgpdt" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.945758 4892 generic.go:334] "Generic (PLEG): container finished" podID="97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97" containerID="f28d27a6cb25281721d08780a3415f6f264057f46950100c360d86c7f596c3d1" exitCode=0 Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.945859 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wtrx4" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.945872 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wtrx4" event={"ID":"97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97","Type":"ContainerDied","Data":"f28d27a6cb25281721d08780a3415f6f264057f46950100c360d86c7f596c3d1"} Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.945890 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wtrx4" event={"ID":"97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97","Type":"ContainerDied","Data":"5801c2b5cb6af330bc0dab2f3d75f8e27e2136c2980b847016b820ec1cf86f99"} Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.972053 4892 scope.go:117] "RemoveContainer" containerID="9e039cf63a021887089e8d3b4bc3b58b069110c637c228054419a06bcd52d9eb" Jan 22 09:15:49 crc kubenswrapper[4892]: E0122 09:15:49.973652 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e039cf63a021887089e8d3b4bc3b58b069110c637c228054419a06bcd52d9eb\": container with ID starting with 9e039cf63a021887089e8d3b4bc3b58b069110c637c228054419a06bcd52d9eb not found: ID does not exist" containerID="9e039cf63a021887089e8d3b4bc3b58b069110c637c228054419a06bcd52d9eb" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.973828 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e039cf63a021887089e8d3b4bc3b58b069110c637c228054419a06bcd52d9eb"} err="failed to get container status \"9e039cf63a021887089e8d3b4bc3b58b069110c637c228054419a06bcd52d9eb\": rpc error: code = NotFound desc = could not find container \"9e039cf63a021887089e8d3b4bc3b58b069110c637c228054419a06bcd52d9eb\": container with ID starting with 9e039cf63a021887089e8d3b4bc3b58b069110c637c228054419a06bcd52d9eb not found: ID does not exist" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.973861 4892 scope.go:117] "RemoveContainer" containerID="f28d27a6cb25281721d08780a3415f6f264057f46950100c360d86c7f596c3d1" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.977428 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vgpdt"] Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.985095 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-64fbcf6cfc-2jxpl"] Jan 22 09:15:49 crc kubenswrapper[4892]: E0122 09:15:49.985315 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97" containerName="controller-manager" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.985331 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97" containerName="controller-manager" Jan 22 09:15:49 crc kubenswrapper[4892]: E0122 09:15:49.985343 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a91f44ce-a5d5-4379-a443-c61626f142f7" containerName="route-controller-manager" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.985350 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="a91f44ce-a5d5-4379-a443-c61626f142f7" containerName="route-controller-manager" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.985514 4892 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97" containerName="controller-manager" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.985539 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="a91f44ce-a5d5-4379-a443-c61626f142f7" containerName="route-controller-manager" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.985922 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64fbcf6cfc-2jxpl" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.991625 4892 scope.go:117] "RemoveContainer" containerID="f28d27a6cb25281721d08780a3415f6f264057f46950100c360d86c7f596c3d1" Jan 22 09:15:49 crc kubenswrapper[4892]: E0122 09:15:49.992010 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f28d27a6cb25281721d08780a3415f6f264057f46950100c360d86c7f596c3d1\": container with ID starting with f28d27a6cb25281721d08780a3415f6f264057f46950100c360d86c7f596c3d1 not found: ID does not exist" containerID="f28d27a6cb25281721d08780a3415f6f264057f46950100c360d86c7f596c3d1" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.992060 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f28d27a6cb25281721d08780a3415f6f264057f46950100c360d86c7f596c3d1"} err="failed to get container status \"f28d27a6cb25281721d08780a3415f6f264057f46950100c360d86c7f596c3d1\": rpc error: code = NotFound desc = could not find container \"f28d27a6cb25281721d08780a3415f6f264057f46950100c360d86c7f596c3d1\": container with ID starting with f28d27a6cb25281721d08780a3415f6f264057f46950100c360d86c7f596c3d1 not found: ID does not exist" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.993622 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.994691 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.994892 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vgpdt"] Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.995002 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.995259 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.995756 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 22 09:15:49 crc kubenswrapper[4892]: I0122 09:15:49.995906 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.000580 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.002460 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79dc877f5-nmh8l"] Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.003313 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79dc877f5-nmh8l" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.006044 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.006403 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.008059 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.008383 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.008725 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.008964 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.014081 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-64fbcf6cfc-2jxpl"] Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.025544 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79dc877f5-nmh8l"] Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.031317 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wtrx4"] Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.034102 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wtrx4"] Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.119349 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0fbecb5-2cec-47ff-bd33-c8fe66b865cf-serving-cert\") pod \"controller-manager-64fbcf6cfc-2jxpl\" (UID: \"e0fbecb5-2cec-47ff-bd33-c8fe66b865cf\") " pod="openshift-controller-manager/controller-manager-64fbcf6cfc-2jxpl" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.119674 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75vg6\" (UniqueName: \"kubernetes.io/projected/cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7-kube-api-access-75vg6\") pod \"route-controller-manager-79dc877f5-nmh8l\" (UID: \"cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7\") " pod="openshift-route-controller-manager/route-controller-manager-79dc877f5-nmh8l" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.119737 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0fbecb5-2cec-47ff-bd33-c8fe66b865cf-config\") pod \"controller-manager-64fbcf6cfc-2jxpl\" (UID: \"e0fbecb5-2cec-47ff-bd33-c8fe66b865cf\") " pod="openshift-controller-manager/controller-manager-64fbcf6cfc-2jxpl" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.119793 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sbq4\" (UniqueName: 
\"kubernetes.io/projected/e0fbecb5-2cec-47ff-bd33-c8fe66b865cf-kube-api-access-5sbq4\") pod \"controller-manager-64fbcf6cfc-2jxpl\" (UID: \"e0fbecb5-2cec-47ff-bd33-c8fe66b865cf\") " pod="openshift-controller-manager/controller-manager-64fbcf6cfc-2jxpl" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.119847 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7-serving-cert\") pod \"route-controller-manager-79dc877f5-nmh8l\" (UID: \"cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7\") " pod="openshift-route-controller-manager/route-controller-manager-79dc877f5-nmh8l" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.119895 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7-config\") pod \"route-controller-manager-79dc877f5-nmh8l\" (UID: \"cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7\") " pod="openshift-route-controller-manager/route-controller-manager-79dc877f5-nmh8l" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.119950 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7-client-ca\") pod \"route-controller-manager-79dc877f5-nmh8l\" (UID: \"cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7\") " pod="openshift-route-controller-manager/route-controller-manager-79dc877f5-nmh8l" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.120003 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0fbecb5-2cec-47ff-bd33-c8fe66b865cf-client-ca\") pod \"controller-manager-64fbcf6cfc-2jxpl\" (UID: \"e0fbecb5-2cec-47ff-bd33-c8fe66b865cf\") " pod="openshift-controller-manager/controller-manager-64fbcf6cfc-2jxpl" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.120032 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e0fbecb5-2cec-47ff-bd33-c8fe66b865cf-proxy-ca-bundles\") pod \"controller-manager-64fbcf6cfc-2jxpl\" (UID: \"e0fbecb5-2cec-47ff-bd33-c8fe66b865cf\") " pod="openshift-controller-manager/controller-manager-64fbcf6cfc-2jxpl" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.141789 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484555-4s6c7" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.221399 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7-config\") pod \"route-controller-manager-79dc877f5-nmh8l\" (UID: \"cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7\") " pod="openshift-route-controller-manager/route-controller-manager-79dc877f5-nmh8l" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.221467 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7-client-ca\") pod \"route-controller-manager-79dc877f5-nmh8l\" (UID: \"cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7\") " pod="openshift-route-controller-manager/route-controller-manager-79dc877f5-nmh8l" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.221502 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0fbecb5-2cec-47ff-bd33-c8fe66b865cf-client-ca\") pod \"controller-manager-64fbcf6cfc-2jxpl\" (UID: \"e0fbecb5-2cec-47ff-bd33-c8fe66b865cf\") " pod="openshift-controller-manager/controller-manager-64fbcf6cfc-2jxpl" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.221523 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e0fbecb5-2cec-47ff-bd33-c8fe66b865cf-proxy-ca-bundles\") pod \"controller-manager-64fbcf6cfc-2jxpl\" (UID: \"e0fbecb5-2cec-47ff-bd33-c8fe66b865cf\") " pod="openshift-controller-manager/controller-manager-64fbcf6cfc-2jxpl" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.221582 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0fbecb5-2cec-47ff-bd33-c8fe66b865cf-serving-cert\") pod \"controller-manager-64fbcf6cfc-2jxpl\" (UID: \"e0fbecb5-2cec-47ff-bd33-c8fe66b865cf\") " pod="openshift-controller-manager/controller-manager-64fbcf6cfc-2jxpl" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.221628 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75vg6\" (UniqueName: \"kubernetes.io/projected/cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7-kube-api-access-75vg6\") pod \"route-controller-manager-79dc877f5-nmh8l\" (UID: \"cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7\") " pod="openshift-route-controller-manager/route-controller-manager-79dc877f5-nmh8l" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.221704 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0fbecb5-2cec-47ff-bd33-c8fe66b865cf-config\") pod \"controller-manager-64fbcf6cfc-2jxpl\" (UID: \"e0fbecb5-2cec-47ff-bd33-c8fe66b865cf\") " pod="openshift-controller-manager/controller-manager-64fbcf6cfc-2jxpl" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.221732 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sbq4\" (UniqueName: \"kubernetes.io/projected/e0fbecb5-2cec-47ff-bd33-c8fe66b865cf-kube-api-access-5sbq4\") pod \"controller-manager-64fbcf6cfc-2jxpl\" (UID: \"e0fbecb5-2cec-47ff-bd33-c8fe66b865cf\") " pod="openshift-controller-manager/controller-manager-64fbcf6cfc-2jxpl" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 
09:15:50.221759 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7-serving-cert\") pod \"route-controller-manager-79dc877f5-nmh8l\" (UID: \"cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7\") " pod="openshift-route-controller-manager/route-controller-manager-79dc877f5-nmh8l" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.223913 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7-client-ca\") pod \"route-controller-manager-79dc877f5-nmh8l\" (UID: \"cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7\") " pod="openshift-route-controller-manager/route-controller-manager-79dc877f5-nmh8l" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.223967 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0fbecb5-2cec-47ff-bd33-c8fe66b865cf-client-ca\") pod \"controller-manager-64fbcf6cfc-2jxpl\" (UID: \"e0fbecb5-2cec-47ff-bd33-c8fe66b865cf\") " pod="openshift-controller-manager/controller-manager-64fbcf6cfc-2jxpl" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.224846 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0fbecb5-2cec-47ff-bd33-c8fe66b865cf-config\") pod \"controller-manager-64fbcf6cfc-2jxpl\" (UID: \"e0fbecb5-2cec-47ff-bd33-c8fe66b865cf\") " pod="openshift-controller-manager/controller-manager-64fbcf6cfc-2jxpl" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.224989 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7-config\") pod \"route-controller-manager-79dc877f5-nmh8l\" (UID: \"cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7\") " pod="openshift-route-controller-manager/route-controller-manager-79dc877f5-nmh8l" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.227498 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7-serving-cert\") pod \"route-controller-manager-79dc877f5-nmh8l\" (UID: \"cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7\") " pod="openshift-route-controller-manager/route-controller-manager-79dc877f5-nmh8l" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.227672 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0fbecb5-2cec-47ff-bd33-c8fe66b865cf-serving-cert\") pod \"controller-manager-64fbcf6cfc-2jxpl\" (UID: \"e0fbecb5-2cec-47ff-bd33-c8fe66b865cf\") " pod="openshift-controller-manager/controller-manager-64fbcf6cfc-2jxpl" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.228008 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e0fbecb5-2cec-47ff-bd33-c8fe66b865cf-proxy-ca-bundles\") pod \"controller-manager-64fbcf6cfc-2jxpl\" (UID: \"e0fbecb5-2cec-47ff-bd33-c8fe66b865cf\") " pod="openshift-controller-manager/controller-manager-64fbcf6cfc-2jxpl" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.239497 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75vg6\" (UniqueName: \"kubernetes.io/projected/cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7-kube-api-access-75vg6\") pod 
\"route-controller-manager-79dc877f5-nmh8l\" (UID: \"cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7\") " pod="openshift-route-controller-manager/route-controller-manager-79dc877f5-nmh8l" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.239826 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sbq4\" (UniqueName: \"kubernetes.io/projected/e0fbecb5-2cec-47ff-bd33-c8fe66b865cf-kube-api-access-5sbq4\") pod \"controller-manager-64fbcf6cfc-2jxpl\" (UID: \"e0fbecb5-2cec-47ff-bd33-c8fe66b865cf\") " pod="openshift-controller-manager/controller-manager-64fbcf6cfc-2jxpl" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.305068 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64fbcf6cfc-2jxpl" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.318434 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79dc877f5-nmh8l" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.322532 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbgrb\" (UniqueName: \"kubernetes.io/projected/f7608641-d2e1-4f1f-9fce-cbc081c61ce9-kube-api-access-fbgrb\") pod \"f7608641-d2e1-4f1f-9fce-cbc081c61ce9\" (UID: \"f7608641-d2e1-4f1f-9fce-cbc081c61ce9\") " Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.322701 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7608641-d2e1-4f1f-9fce-cbc081c61ce9-config-volume\") pod \"f7608641-d2e1-4f1f-9fce-cbc081c61ce9\" (UID: \"f7608641-d2e1-4f1f-9fce-cbc081c61ce9\") " Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.322742 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7608641-d2e1-4f1f-9fce-cbc081c61ce9-secret-volume\") pod \"f7608641-d2e1-4f1f-9fce-cbc081c61ce9\" (UID: \"f7608641-d2e1-4f1f-9fce-cbc081c61ce9\") " Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.323516 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7608641-d2e1-4f1f-9fce-cbc081c61ce9-config-volume" (OuterVolumeSpecName: "config-volume") pod "f7608641-d2e1-4f1f-9fce-cbc081c61ce9" (UID: "f7608641-d2e1-4f1f-9fce-cbc081c61ce9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.326353 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7608641-d2e1-4f1f-9fce-cbc081c61ce9-kube-api-access-fbgrb" (OuterVolumeSpecName: "kube-api-access-fbgrb") pod "f7608641-d2e1-4f1f-9fce-cbc081c61ce9" (UID: "f7608641-d2e1-4f1f-9fce-cbc081c61ce9"). InnerVolumeSpecName "kube-api-access-fbgrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.328428 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7608641-d2e1-4f1f-9fce-cbc081c61ce9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f7608641-d2e1-4f1f-9fce-cbc081c61ce9" (UID: "f7608641-d2e1-4f1f-9fce-cbc081c61ce9"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.424227 4892 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7608641-d2e1-4f1f-9fce-cbc081c61ce9-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.424268 4892 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7608641-d2e1-4f1f-9fce-cbc081c61ce9-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.424311 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbgrb\" (UniqueName: \"kubernetes.io/projected/f7608641-d2e1-4f1f-9fce-cbc081c61ce9-kube-api-access-fbgrb\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.483037 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-64fbcf6cfc-2jxpl"] Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.525108 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79dc877f5-nmh8l"] Jan 22 09:15:50 crc kubenswrapper[4892]: W0122 09:15:50.541345 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdb9b020_e54a_4c17_a077_4bd4ccf6bbf7.slice/crio-e3ba119288d4c0400236678188600a8f4e4be9c62b5fd7315417dffbd6a55484 WatchSource:0}: Error finding container e3ba119288d4c0400236678188600a8f4e4be9c62b5fd7315417dffbd6a55484: Status 404 returned error can't find the container with id e3ba119288d4c0400236678188600a8f4e4be9c62b5fd7315417dffbd6a55484 Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.954382 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79dc877f5-nmh8l" event={"ID":"cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7","Type":"ContainerStarted","Data":"543218dbb118a4329bb2de2b207a261425dd78481d4244bbe82261b1ad48ef0b"} Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.954797 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79dc877f5-nmh8l" event={"ID":"cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7","Type":"ContainerStarted","Data":"e3ba119288d4c0400236678188600a8f4e4be9c62b5fd7315417dffbd6a55484"} Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.955168 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-79dc877f5-nmh8l" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.956562 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64fbcf6cfc-2jxpl" event={"ID":"e0fbecb5-2cec-47ff-bd33-c8fe66b865cf","Type":"ContainerStarted","Data":"e47af516d4d5b2953296a365472198ce1bc086ffa9679c876fc34bb59b478047"} Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.956614 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64fbcf6cfc-2jxpl" event={"ID":"e0fbecb5-2cec-47ff-bd33-c8fe66b865cf","Type":"ContainerStarted","Data":"41aa8b55552407de077caf8679d8917b98617ec9f11542ac1f4b544ca892e0fb"} Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.957711 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-64fbcf6cfc-2jxpl" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.961601 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484555-4s6c7" event={"ID":"f7608641-d2e1-4f1f-9fce-cbc081c61ce9","Type":"ContainerDied","Data":"0298d8aa9cb203d7a0e7c47a37ffa5b358af714facbbd027a856174967172343"} Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.961700 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0298d8aa9cb203d7a0e7c47a37ffa5b358af714facbbd027a856174967172343" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.961748 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-64fbcf6cfc-2jxpl" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.961623 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484555-4s6c7" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.978868 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-79dc877f5-nmh8l" podStartSLOduration=2.978846085 podStartE2EDuration="2.978846085s" podCreationTimestamp="2026-01-22 09:15:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:15:50.976062094 +0000 UTC m=+320.820141157" watchObservedRunningTime="2026-01-22 09:15:50.978846085 +0000 UTC m=+320.822925148" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.990625 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-79dc877f5-nmh8l" Jan 22 09:15:50 crc kubenswrapper[4892]: I0122 09:15:50.993325 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-64fbcf6cfc-2jxpl" podStartSLOduration=2.993306905 podStartE2EDuration="2.993306905s" podCreationTimestamp="2026-01-22 09:15:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:15:50.99196158 +0000 UTC m=+320.836040643" watchObservedRunningTime="2026-01-22 09:15:50.993306905 +0000 UTC m=+320.837385968" Jan 22 09:15:51 crc kubenswrapper[4892]: I0122 09:15:51.285574 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" podUID="5106e58c-1823-4cf9-8a5b-5fc9a001e8a7" containerName="registry" containerID="cri-o://75ec72c5e37d8221310e182162053d31550e78bc7888e7c50d1d9b0ad746aeb6" gracePeriod=30 Jan 22 09:15:51 crc kubenswrapper[4892]: I0122 09:15:51.425662 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97" path="/var/lib/kubelet/pods/97b4c9ea-5fd1-4006-a1e8-3ed3f0f37c97/volumes" Jan 22 09:15:51 crc kubenswrapper[4892]: I0122 09:15:51.426367 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a91f44ce-a5d5-4379-a443-c61626f142f7" path="/var/lib/kubelet/pods/a91f44ce-a5d5-4379-a443-c61626f142f7/volumes" Jan 22 09:15:51 crc kubenswrapper[4892]: I0122 09:15:51.629256 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:15:51 crc kubenswrapper[4892]: I0122 09:15:51.749951 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-registry-tls\") pod \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " Jan 22 09:15:51 crc kubenswrapper[4892]: I0122 09:15:51.750058 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-registry-certificates\") pod \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " Jan 22 09:15:51 crc kubenswrapper[4892]: I0122 09:15:51.750119 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-bound-sa-token\") pod \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " Jan 22 09:15:51 crc kubenswrapper[4892]: I0122 09:15:51.750186 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-trusted-ca\") pod \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " Jan 22 09:15:51 crc kubenswrapper[4892]: I0122 09:15:51.750264 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvlht\" (UniqueName: \"kubernetes.io/projected/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-kube-api-access-tvlht\") pod \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " Jan 22 09:15:51 crc kubenswrapper[4892]: I0122 09:15:51.750395 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-installation-pull-secrets\") pod \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " Jan 22 09:15:51 crc kubenswrapper[4892]: I0122 09:15:51.750431 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-ca-trust-extracted\") pod \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " Jan 22 09:15:51 crc kubenswrapper[4892]: I0122 09:15:51.750591 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\" (UID: \"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7\") " Jan 22 09:15:51 crc kubenswrapper[4892]: I0122 09:15:51.751121 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:15:51 crc kubenswrapper[4892]: I0122 09:15:51.756424 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:15:51 crc kubenswrapper[4892]: I0122 09:15:51.757656 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-kube-api-access-tvlht" (OuterVolumeSpecName: "kube-api-access-tvlht") pod "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7"). InnerVolumeSpecName "kube-api-access-tvlht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:15:51 crc kubenswrapper[4892]: I0122 09:15:51.758199 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:15:51 crc kubenswrapper[4892]: I0122 09:15:51.761420 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:15:51 crc kubenswrapper[4892]: I0122 09:15:51.761890 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 22 09:15:51 crc kubenswrapper[4892]: I0122 09:15:51.767440 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:15:51 crc kubenswrapper[4892]: I0122 09:15:51.768138 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7" (UID: "5106e58c-1823-4cf9-8a5b-5fc9a001e8a7"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:15:51 crc kubenswrapper[4892]: I0122 09:15:51.852369 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvlht\" (UniqueName: \"kubernetes.io/projected/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-kube-api-access-tvlht\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:51 crc kubenswrapper[4892]: I0122 09:15:51.852415 4892 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:51 crc kubenswrapper[4892]: I0122 09:15:51.852428 4892 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:51 crc kubenswrapper[4892]: I0122 09:15:51.852440 4892 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:51 crc kubenswrapper[4892]: I0122 09:15:51.852454 4892 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:51 crc kubenswrapper[4892]: I0122 09:15:51.852466 4892 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:51 crc kubenswrapper[4892]: I0122 09:15:51.852476 4892 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:51 crc kubenswrapper[4892]: I0122 09:15:51.971226 4892 generic.go:334] "Generic (PLEG): container finished" podID="5106e58c-1823-4cf9-8a5b-5fc9a001e8a7" containerID="75ec72c5e37d8221310e182162053d31550e78bc7888e7c50d1d9b0ad746aeb6" exitCode=0 Jan 22 09:15:51 crc kubenswrapper[4892]: I0122 09:15:51.972574 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" Jan 22 09:15:51 crc kubenswrapper[4892]: I0122 09:15:51.974348 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" event={"ID":"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7","Type":"ContainerDied","Data":"75ec72c5e37d8221310e182162053d31550e78bc7888e7c50d1d9b0ad746aeb6"} Jan 22 09:15:51 crc kubenswrapper[4892]: I0122 09:15:51.974412 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vbm7b" event={"ID":"5106e58c-1823-4cf9-8a5b-5fc9a001e8a7","Type":"ContainerDied","Data":"d4a9a1b5bf58864af40df8bdad75f9c8aebeb12575d87dcc89d36a260a2db528"} Jan 22 09:15:51 crc kubenswrapper[4892]: I0122 09:15:51.974450 4892 scope.go:117] "RemoveContainer" containerID="75ec72c5e37d8221310e182162053d31550e78bc7888e7c50d1d9b0ad746aeb6" Jan 22 09:15:51 crc kubenswrapper[4892]: I0122 09:15:51.991270 4892 scope.go:117] "RemoveContainer" containerID="75ec72c5e37d8221310e182162053d31550e78bc7888e7c50d1d9b0ad746aeb6" Jan 22 09:15:51 crc kubenswrapper[4892]: E0122 09:15:51.993215 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75ec72c5e37d8221310e182162053d31550e78bc7888e7c50d1d9b0ad746aeb6\": container with ID starting with 75ec72c5e37d8221310e182162053d31550e78bc7888e7c50d1d9b0ad746aeb6 not found: ID does not exist" containerID="75ec72c5e37d8221310e182162053d31550e78bc7888e7c50d1d9b0ad746aeb6" Jan 22 09:15:51 crc kubenswrapper[4892]: I0122 09:15:51.993357 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75ec72c5e37d8221310e182162053d31550e78bc7888e7c50d1d9b0ad746aeb6"} err="failed to get container status \"75ec72c5e37d8221310e182162053d31550e78bc7888e7c50d1d9b0ad746aeb6\": rpc error: code = NotFound desc = could not find container \"75ec72c5e37d8221310e182162053d31550e78bc7888e7c50d1d9b0ad746aeb6\": container with ID starting with 75ec72c5e37d8221310e182162053d31550e78bc7888e7c50d1d9b0ad746aeb6 not found: ID does not exist" Jan 22 09:15:52 crc kubenswrapper[4892]: I0122 09:15:52.031204 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vbm7b"] Jan 22 09:15:52 crc kubenswrapper[4892]: I0122 09:15:52.037667 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vbm7b"] Jan 22 09:15:53 crc kubenswrapper[4892]: I0122 09:15:53.425182 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5106e58c-1823-4cf9-8a5b-5fc9a001e8a7" path="/var/lib/kubelet/pods/5106e58c-1823-4cf9-8a5b-5fc9a001e8a7/volumes" Jan 22 09:15:54 crc kubenswrapper[4892]: I0122 09:15:54.696494 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 22 09:16:05 crc kubenswrapper[4892]: I0122 09:16:05.000587 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-64fbcf6cfc-2jxpl"] Jan 22 09:16:05 crc kubenswrapper[4892]: I0122 09:16:05.001492 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-64fbcf6cfc-2jxpl" podUID="e0fbecb5-2cec-47ff-bd33-c8fe66b865cf" containerName="controller-manager" containerID="cri-o://e47af516d4d5b2953296a365472198ce1bc086ffa9679c876fc34bb59b478047" gracePeriod=30 Jan 22 09:16:05 
Jan 22 09:16:05 crc kubenswrapper[4892]: I0122 09:16:05.020877 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79dc877f5-nmh8l"]
Jan 22 09:16:05 crc kubenswrapper[4892]: I0122 09:16:05.021113 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-79dc877f5-nmh8l" podUID="cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7" containerName="route-controller-manager" containerID="cri-o://543218dbb118a4329bb2de2b207a261425dd78481d4244bbe82261b1ad48ef0b" gracePeriod=30
Jan 22 09:16:05 crc kubenswrapper[4892]: I0122 09:16:05.555972 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79dc877f5-nmh8l"
Jan 22 09:16:05 crc kubenswrapper[4892]: I0122 09:16:05.561243 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64fbcf6cfc-2jxpl"
Jan 22 09:16:05 crc kubenswrapper[4892]: I0122 09:16:05.716342 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7-serving-cert\") pod \"cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7\" (UID: \"cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7\") "
Jan 22 09:16:05 crc kubenswrapper[4892]: I0122 09:16:05.716388 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sbq4\" (UniqueName: \"kubernetes.io/projected/e0fbecb5-2cec-47ff-bd33-c8fe66b865cf-kube-api-access-5sbq4\") pod \"e0fbecb5-2cec-47ff-bd33-c8fe66b865cf\" (UID: \"e0fbecb5-2cec-47ff-bd33-c8fe66b865cf\") "
Jan 22 09:16:05 crc kubenswrapper[4892]: I0122 09:16:05.716407 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7-client-ca\") pod \"cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7\" (UID: \"cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7\") "
Jan 22 09:16:05 crc kubenswrapper[4892]: I0122 09:16:05.716428 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7-config\") pod \"cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7\" (UID: \"cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7\") "
Jan 22 09:16:05 crc kubenswrapper[4892]: I0122 09:16:05.716447 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75vg6\" (UniqueName: \"kubernetes.io/projected/cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7-kube-api-access-75vg6\") pod \"cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7\" (UID: \"cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7\") "
Jan 22 09:16:05 crc kubenswrapper[4892]: I0122 09:16:05.716474 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0fbecb5-2cec-47ff-bd33-c8fe66b865cf-client-ca\") pod \"e0fbecb5-2cec-47ff-bd33-c8fe66b865cf\" (UID: \"e0fbecb5-2cec-47ff-bd33-c8fe66b865cf\") "
Jan 22 09:16:05 crc kubenswrapper[4892]: I0122 09:16:05.716496 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0fbecb5-2cec-47ff-bd33-c8fe66b865cf-serving-cert\") pod \"e0fbecb5-2cec-47ff-bd33-c8fe66b865cf\" (UID: \"e0fbecb5-2cec-47ff-bd33-c8fe66b865cf\") "
Jan 22 09:16:05 crc kubenswrapper[4892]: I0122 09:16:05.716534 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0fbecb5-2cec-47ff-bd33-c8fe66b865cf-config\") pod \"e0fbecb5-2cec-47ff-bd33-c8fe66b865cf\" (UID: \"e0fbecb5-2cec-47ff-bd33-c8fe66b865cf\") "
Jan 22 09:16:05 crc kubenswrapper[4892]: I0122 09:16:05.716555 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e0fbecb5-2cec-47ff-bd33-c8fe66b865cf-proxy-ca-bundles\") pod \"e0fbecb5-2cec-47ff-bd33-c8fe66b865cf\" (UID: \"e0fbecb5-2cec-47ff-bd33-c8fe66b865cf\") "
Jan 22 09:16:05 crc kubenswrapper[4892]: I0122 09:16:05.717559 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0fbecb5-2cec-47ff-bd33-c8fe66b865cf-client-ca" (OuterVolumeSpecName: "client-ca") pod "e0fbecb5-2cec-47ff-bd33-c8fe66b865cf" (UID: "e0fbecb5-2cec-47ff-bd33-c8fe66b865cf"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:16:05 crc kubenswrapper[4892]: I0122 09:16:05.717603 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0fbecb5-2cec-47ff-bd33-c8fe66b865cf-config" (OuterVolumeSpecName: "config") pod "e0fbecb5-2cec-47ff-bd33-c8fe66b865cf" (UID: "e0fbecb5-2cec-47ff-bd33-c8fe66b865cf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:16:05 crc kubenswrapper[4892]: I0122 09:16:05.717563 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7-client-ca" (OuterVolumeSpecName: "client-ca") pod "cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7" (UID: "cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:16:05 crc kubenswrapper[4892]: I0122 09:16:05.717936 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0fbecb5-2cec-47ff-bd33-c8fe66b865cf-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e0fbecb5-2cec-47ff-bd33-c8fe66b865cf" (UID: "e0fbecb5-2cec-47ff-bd33-c8fe66b865cf"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:16:05 crc kubenswrapper[4892]: I0122 09:16:05.717964 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7-config" (OuterVolumeSpecName: "config") pod "cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7" (UID: "cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:16:05 crc kubenswrapper[4892]: I0122 09:16:05.721703 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7-kube-api-access-75vg6" (OuterVolumeSpecName: "kube-api-access-75vg6") pod "cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7" (UID: "cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7"). InnerVolumeSpecName "kube-api-access-75vg6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:16:05 crc kubenswrapper[4892]: I0122 09:16:05.721704 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7" (UID: "cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:16:05 crc kubenswrapper[4892]: I0122 09:16:05.721765 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0fbecb5-2cec-47ff-bd33-c8fe66b865cf-kube-api-access-5sbq4" (OuterVolumeSpecName: "kube-api-access-5sbq4") pod "e0fbecb5-2cec-47ff-bd33-c8fe66b865cf" (UID: "e0fbecb5-2cec-47ff-bd33-c8fe66b865cf"). InnerVolumeSpecName "kube-api-access-5sbq4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:16:05 crc kubenswrapper[4892]: I0122 09:16:05.721824 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0fbecb5-2cec-47ff-bd33-c8fe66b865cf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e0fbecb5-2cec-47ff-bd33-c8fe66b865cf" (UID: "e0fbecb5-2cec-47ff-bd33-c8fe66b865cf"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:16:05 crc kubenswrapper[4892]: I0122 09:16:05.817969 4892 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e0fbecb5-2cec-47ff-bd33-c8fe66b865cf-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 22 09:16:05 crc kubenswrapper[4892]: I0122 09:16:05.818019 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sbq4\" (UniqueName: \"kubernetes.io/projected/e0fbecb5-2cec-47ff-bd33-c8fe66b865cf-kube-api-access-5sbq4\") on node \"crc\" DevicePath \"\""
Jan 22 09:16:05 crc kubenswrapper[4892]: I0122 09:16:05.818035 4892 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 09:16:05 crc kubenswrapper[4892]: I0122 09:16:05.818047 4892 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7-client-ca\") on node \"crc\" DevicePath \"\""
Jan 22 09:16:05 crc kubenswrapper[4892]: I0122 09:16:05.818059 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7-config\") on node \"crc\" DevicePath \"\""
Jan 22 09:16:05 crc kubenswrapper[4892]: I0122 09:16:05.818071 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75vg6\" (UniqueName: \"kubernetes.io/projected/cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7-kube-api-access-75vg6\") on node \"crc\" DevicePath \"\""
Jan 22 09:16:05 crc kubenswrapper[4892]: I0122 09:16:05.818082 4892 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0fbecb5-2cec-47ff-bd33-c8fe66b865cf-client-ca\") on node \"crc\" DevicePath \"\""
Jan 22 09:16:05 crc kubenswrapper[4892]: I0122 09:16:05.818093 4892 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0fbecb5-2cec-47ff-bd33-c8fe66b865cf-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 09:16:05 crc kubenswrapper[4892]: I0122 09:16:05.818104 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0fbecb5-2cec-47ff-bd33-c8fe66b865cf-config\") on node \"crc\" DevicePath \"\""
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.048822 4892 generic.go:334] "Generic (PLEG): container finished" podID="cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7" containerID="543218dbb118a4329bb2de2b207a261425dd78481d4244bbe82261b1ad48ef0b" exitCode=0
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.048895 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79dc877f5-nmh8l"
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.049001 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79dc877f5-nmh8l" event={"ID":"cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7","Type":"ContainerDied","Data":"543218dbb118a4329bb2de2b207a261425dd78481d4244bbe82261b1ad48ef0b"}
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.049237 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79dc877f5-nmh8l" event={"ID":"cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7","Type":"ContainerDied","Data":"e3ba119288d4c0400236678188600a8f4e4be9c62b5fd7315417dffbd6a55484"}
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.049342 4892 scope.go:117] "RemoveContainer" containerID="543218dbb118a4329bb2de2b207a261425dd78481d4244bbe82261b1ad48ef0b"
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.051684 4892 generic.go:334] "Generic (PLEG): container finished" podID="e0fbecb5-2cec-47ff-bd33-c8fe66b865cf" containerID="e47af516d4d5b2953296a365472198ce1bc086ffa9679c876fc34bb59b478047" exitCode=0
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.051729 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64fbcf6cfc-2jxpl" event={"ID":"e0fbecb5-2cec-47ff-bd33-c8fe66b865cf","Type":"ContainerDied","Data":"e47af516d4d5b2953296a365472198ce1bc086ffa9679c876fc34bb59b478047"}
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.051744 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64fbcf6cfc-2jxpl" event={"ID":"e0fbecb5-2cec-47ff-bd33-c8fe66b865cf","Type":"ContainerDied","Data":"41aa8b55552407de077caf8679d8917b98617ec9f11542ac1f4b544ca892e0fb"}
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.051802 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64fbcf6cfc-2jxpl"
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.073409 4892 scope.go:117] "RemoveContainer" containerID="543218dbb118a4329bb2de2b207a261425dd78481d4244bbe82261b1ad48ef0b"
Jan 22 09:16:06 crc kubenswrapper[4892]: E0122 09:16:06.073809 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"543218dbb118a4329bb2de2b207a261425dd78481d4244bbe82261b1ad48ef0b\": container with ID starting with 543218dbb118a4329bb2de2b207a261425dd78481d4244bbe82261b1ad48ef0b not found: ID does not exist" containerID="543218dbb118a4329bb2de2b207a261425dd78481d4244bbe82261b1ad48ef0b"
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.073852 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"543218dbb118a4329bb2de2b207a261425dd78481d4244bbe82261b1ad48ef0b"} err="failed to get container status \"543218dbb118a4329bb2de2b207a261425dd78481d4244bbe82261b1ad48ef0b\": rpc error: code = NotFound desc = could not find container \"543218dbb118a4329bb2de2b207a261425dd78481d4244bbe82261b1ad48ef0b\": container with ID starting with 543218dbb118a4329bb2de2b207a261425dd78481d4244bbe82261b1ad48ef0b not found: ID does not exist"
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.073879 4892 scope.go:117] "RemoveContainer" containerID="e47af516d4d5b2953296a365472198ce1bc086ffa9679c876fc34bb59b478047"
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.084274 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79dc877f5-nmh8l"]
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.091330 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79dc877f5-nmh8l"]
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.095158 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-64fbcf6cfc-2jxpl"]
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.098075 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-64fbcf6cfc-2jxpl"]
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.098364 4892 scope.go:117] "RemoveContainer" containerID="e47af516d4d5b2953296a365472198ce1bc086ffa9679c876fc34bb59b478047"
Jan 22 09:16:06 crc kubenswrapper[4892]: E0122 09:16:06.099505 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e47af516d4d5b2953296a365472198ce1bc086ffa9679c876fc34bb59b478047\": container with ID starting with e47af516d4d5b2953296a365472198ce1bc086ffa9679c876fc34bb59b478047 not found: ID does not exist" containerID="e47af516d4d5b2953296a365472198ce1bc086ffa9679c876fc34bb59b478047"
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.099557 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e47af516d4d5b2953296a365472198ce1bc086ffa9679c876fc34bb59b478047"} err="failed to get container status \"e47af516d4d5b2953296a365472198ce1bc086ffa9679c876fc34bb59b478047\": rpc error: code = NotFound desc = could not find container \"e47af516d4d5b2953296a365472198ce1bc086ffa9679c876fc34bb59b478047\": container with ID starting with e47af516d4d5b2953296a365472198ce1bc086ffa9679c876fc34bb59b478047 not found: ID does not exist"
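Note: entries for a single pod are interleaved with several other pods throughout this section; filtering the journal by pod UID (e.g. cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7) reconstructs one pod's kill → unmount → detach → REMOVE timeline in order. A minimal, hypothetical helper (podgrep) that reads the journal on stdin and prints matching lines:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: podgrep <pod-uid> < journal.log")
		os.Exit(1)
	}
	uid := os.Args[1]

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		// A plain substring match catches both quoted UIDs and UIDs
		// embedded in volume paths like kubernetes.io/secret/<uid>-....
		if strings.Contains(line, uid) {
			fmt.Println(line)
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "read error:", err)
		os.Exit(1)
	}
}
```

For example, `journalctl -u kubelet | go run podgrep.go cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7` isolates the route-controller-manager-79dc877f5-nmh8l teardown shown above.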
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.980390 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f666bcdf6-ftxzb"]
Jan 22 09:16:06 crc kubenswrapper[4892]: E0122 09:16:06.981101 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7608641-d2e1-4f1f-9fce-cbc081c61ce9" containerName="collect-profiles"
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.981496 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7608641-d2e1-4f1f-9fce-cbc081c61ce9" containerName="collect-profiles"
Jan 22 09:16:06 crc kubenswrapper[4892]: E0122 09:16:06.981662 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0fbecb5-2cec-47ff-bd33-c8fe66b865cf" containerName="controller-manager"
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.981841 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0fbecb5-2cec-47ff-bd33-c8fe66b865cf" containerName="controller-manager"
Jan 22 09:16:06 crc kubenswrapper[4892]: E0122 09:16:06.982001 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5106e58c-1823-4cf9-8a5b-5fc9a001e8a7" containerName="registry"
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.982155 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="5106e58c-1823-4cf9-8a5b-5fc9a001e8a7" containerName="registry"
Jan 22 09:16:06 crc kubenswrapper[4892]: E0122 09:16:06.982352 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7" containerName="route-controller-manager"
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.982512 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7" containerName="route-controller-manager"
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.982902 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0fbecb5-2cec-47ff-bd33-c8fe66b865cf" containerName="controller-manager"
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.983100 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7608641-d2e1-4f1f-9fce-cbc081c61ce9" containerName="collect-profiles"
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.983239 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="5106e58c-1823-4cf9-8a5b-5fc9a001e8a7" containerName="registry"
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.983422 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7" containerName="route-controller-manager"
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.984226 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f666bcdf6-ftxzb"
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.984336 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bc8df74b6-jz94j"]
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.985954 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bc8df74b6-jz94j"
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.987433 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.987934 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.988944 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.994466 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.994701 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.995141 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.995390 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.995585 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.995730 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.995964 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.995987 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 22 09:16:06 crc kubenswrapper[4892]: I0122 09:16:06.996262 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.000720 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.009962 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bc8df74b6-jz94j"]
Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.013567 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f666bcdf6-ftxzb"]
Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.131244 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c575c904-f1d7-4b16-b512-8fcdbf63fc65-serving-cert\") pod \"route-controller-manager-6bc8df74b6-jz94j\" (UID: \"c575c904-f1d7-4b16-b512-8fcdbf63fc65\") " pod="openshift-route-controller-manager/route-controller-manager-6bc8df74b6-jz94j"
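Note: the reflector.go:368 "Caches populated" lines record the kubelet's informer caches completing their initial list-and-watch for the Secrets and ConfigMaps the replacement pods mount. A minimal client-go sketch of the same warm-up pattern, assuming in-cluster credentials and using a namespace taken from the log; this illustrates the mechanism, not the kubelet's own wiring:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes this runs inside a pod
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Scope the informers to one namespace, as the kubelet effectively
	// does per object it needs.
	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 10*time.Minute,
		informers.WithNamespace("openshift-controller-manager"),
	)
	cmInformer := factory.Core().V1().ConfigMaps().Informer()
	secretInformer := factory.Core().V1().Secrets().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// This is the moment the log marks as "Caches populated": the initial
	// LIST has landed and the WATCH is established.
	if !cache.WaitForCacheSync(stop, cmInformer.HasSynced, secretInformer.HasSynced) {
		panic("caches never synced")
	}
	fmt.Println("caches populated; reads now come from the local store")
}
```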
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/3da361da-f0e8-46c5-b637-f7ef4d57b801-serving-cert\") pod \"controller-manager-7f666bcdf6-ftxzb\" (UID: \"3da361da-f0e8-46c5-b637-f7ef4d57b801\") " pod="openshift-controller-manager/controller-manager-7f666bcdf6-ftxzb" Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.131446 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3da361da-f0e8-46c5-b637-f7ef4d57b801-config\") pod \"controller-manager-7f666bcdf6-ftxzb\" (UID: \"3da361da-f0e8-46c5-b637-f7ef4d57b801\") " pod="openshift-controller-manager/controller-manager-7f666bcdf6-ftxzb" Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.131483 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3da361da-f0e8-46c5-b637-f7ef4d57b801-proxy-ca-bundles\") pod \"controller-manager-7f666bcdf6-ftxzb\" (UID: \"3da361da-f0e8-46c5-b637-f7ef4d57b801\") " pod="openshift-controller-manager/controller-manager-7f666bcdf6-ftxzb" Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.131506 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3da361da-f0e8-46c5-b637-f7ef4d57b801-client-ca\") pod \"controller-manager-7f666bcdf6-ftxzb\" (UID: \"3da361da-f0e8-46c5-b637-f7ef4d57b801\") " pod="openshift-controller-manager/controller-manager-7f666bcdf6-ftxzb" Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.131531 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6hxv\" (UniqueName: \"kubernetes.io/projected/c575c904-f1d7-4b16-b512-8fcdbf63fc65-kube-api-access-f6hxv\") pod \"route-controller-manager-6bc8df74b6-jz94j\" (UID: \"c575c904-f1d7-4b16-b512-8fcdbf63fc65\") " pod="openshift-route-controller-manager/route-controller-manager-6bc8df74b6-jz94j" Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.131551 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c575c904-f1d7-4b16-b512-8fcdbf63fc65-client-ca\") pod \"route-controller-manager-6bc8df74b6-jz94j\" (UID: \"c575c904-f1d7-4b16-b512-8fcdbf63fc65\") " pod="openshift-route-controller-manager/route-controller-manager-6bc8df74b6-jz94j" Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.131576 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c575c904-f1d7-4b16-b512-8fcdbf63fc65-config\") pod \"route-controller-manager-6bc8df74b6-jz94j\" (UID: \"c575c904-f1d7-4b16-b512-8fcdbf63fc65\") " pod="openshift-route-controller-manager/route-controller-manager-6bc8df74b6-jz94j" Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.131660 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpg6j\" (UniqueName: \"kubernetes.io/projected/3da361da-f0e8-46c5-b637-f7ef4d57b801-kube-api-access-kpg6j\") pod \"controller-manager-7f666bcdf6-ftxzb\" (UID: \"3da361da-f0e8-46c5-b637-f7ef4d57b801\") " pod="openshift-controller-manager/controller-manager-7f666bcdf6-ftxzb" Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.232900 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/3da361da-f0e8-46c5-b637-f7ef4d57b801-serving-cert\") pod \"controller-manager-7f666bcdf6-ftxzb\" (UID: \"3da361da-f0e8-46c5-b637-f7ef4d57b801\") " pod="openshift-controller-manager/controller-manager-7f666bcdf6-ftxzb" Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.232943 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3da361da-f0e8-46c5-b637-f7ef4d57b801-config\") pod \"controller-manager-7f666bcdf6-ftxzb\" (UID: \"3da361da-f0e8-46c5-b637-f7ef4d57b801\") " pod="openshift-controller-manager/controller-manager-7f666bcdf6-ftxzb" Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.232970 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3da361da-f0e8-46c5-b637-f7ef4d57b801-proxy-ca-bundles\") pod \"controller-manager-7f666bcdf6-ftxzb\" (UID: \"3da361da-f0e8-46c5-b637-f7ef4d57b801\") " pod="openshift-controller-manager/controller-manager-7f666bcdf6-ftxzb" Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.232989 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3da361da-f0e8-46c5-b637-f7ef4d57b801-client-ca\") pod \"controller-manager-7f666bcdf6-ftxzb\" (UID: \"3da361da-f0e8-46c5-b637-f7ef4d57b801\") " pod="openshift-controller-manager/controller-manager-7f666bcdf6-ftxzb" Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.233009 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6hxv\" (UniqueName: \"kubernetes.io/projected/c575c904-f1d7-4b16-b512-8fcdbf63fc65-kube-api-access-f6hxv\") pod \"route-controller-manager-6bc8df74b6-jz94j\" (UID: \"c575c904-f1d7-4b16-b512-8fcdbf63fc65\") " pod="openshift-route-controller-manager/route-controller-manager-6bc8df74b6-jz94j" Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.233026 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c575c904-f1d7-4b16-b512-8fcdbf63fc65-client-ca\") pod \"route-controller-manager-6bc8df74b6-jz94j\" (UID: \"c575c904-f1d7-4b16-b512-8fcdbf63fc65\") " pod="openshift-route-controller-manager/route-controller-manager-6bc8df74b6-jz94j" Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.233052 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c575c904-f1d7-4b16-b512-8fcdbf63fc65-config\") pod \"route-controller-manager-6bc8df74b6-jz94j\" (UID: \"c575c904-f1d7-4b16-b512-8fcdbf63fc65\") " pod="openshift-route-controller-manager/route-controller-manager-6bc8df74b6-jz94j" Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.233079 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpg6j\" (UniqueName: \"kubernetes.io/projected/3da361da-f0e8-46c5-b637-f7ef4d57b801-kube-api-access-kpg6j\") pod \"controller-manager-7f666bcdf6-ftxzb\" (UID: \"3da361da-f0e8-46c5-b637-f7ef4d57b801\") " pod="openshift-controller-manager/controller-manager-7f666bcdf6-ftxzb" Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.233102 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c575c904-f1d7-4b16-b512-8fcdbf63fc65-serving-cert\") pod \"route-controller-manager-6bc8df74b6-jz94j\" (UID: 
\"c575c904-f1d7-4b16-b512-8fcdbf63fc65\") " pod="openshift-route-controller-manager/route-controller-manager-6bc8df74b6-jz94j" Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.234372 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c575c904-f1d7-4b16-b512-8fcdbf63fc65-client-ca\") pod \"route-controller-manager-6bc8df74b6-jz94j\" (UID: \"c575c904-f1d7-4b16-b512-8fcdbf63fc65\") " pod="openshift-route-controller-manager/route-controller-manager-6bc8df74b6-jz94j" Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.234369 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3da361da-f0e8-46c5-b637-f7ef4d57b801-client-ca\") pod \"controller-manager-7f666bcdf6-ftxzb\" (UID: \"3da361da-f0e8-46c5-b637-f7ef4d57b801\") " pod="openshift-controller-manager/controller-manager-7f666bcdf6-ftxzb" Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.234752 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3da361da-f0e8-46c5-b637-f7ef4d57b801-proxy-ca-bundles\") pod \"controller-manager-7f666bcdf6-ftxzb\" (UID: \"3da361da-f0e8-46c5-b637-f7ef4d57b801\") " pod="openshift-controller-manager/controller-manager-7f666bcdf6-ftxzb" Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.235128 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c575c904-f1d7-4b16-b512-8fcdbf63fc65-config\") pod \"route-controller-manager-6bc8df74b6-jz94j\" (UID: \"c575c904-f1d7-4b16-b512-8fcdbf63fc65\") " pod="openshift-route-controller-manager/route-controller-manager-6bc8df74b6-jz94j" Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.235434 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3da361da-f0e8-46c5-b637-f7ef4d57b801-config\") pod \"controller-manager-7f666bcdf6-ftxzb\" (UID: \"3da361da-f0e8-46c5-b637-f7ef4d57b801\") " pod="openshift-controller-manager/controller-manager-7f666bcdf6-ftxzb" Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.239217 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c575c904-f1d7-4b16-b512-8fcdbf63fc65-serving-cert\") pod \"route-controller-manager-6bc8df74b6-jz94j\" (UID: \"c575c904-f1d7-4b16-b512-8fcdbf63fc65\") " pod="openshift-route-controller-manager/route-controller-manager-6bc8df74b6-jz94j" Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.239686 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3da361da-f0e8-46c5-b637-f7ef4d57b801-serving-cert\") pod \"controller-manager-7f666bcdf6-ftxzb\" (UID: \"3da361da-f0e8-46c5-b637-f7ef4d57b801\") " pod="openshift-controller-manager/controller-manager-7f666bcdf6-ftxzb" Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.251115 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6hxv\" (UniqueName: \"kubernetes.io/projected/c575c904-f1d7-4b16-b512-8fcdbf63fc65-kube-api-access-f6hxv\") pod \"route-controller-manager-6bc8df74b6-jz94j\" (UID: \"c575c904-f1d7-4b16-b512-8fcdbf63fc65\") " pod="openshift-route-controller-manager/route-controller-manager-6bc8df74b6-jz94j" Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.252452 4892 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpg6j\" (UniqueName: \"kubernetes.io/projected/3da361da-f0e8-46c5-b637-f7ef4d57b801-kube-api-access-kpg6j\") pod \"controller-manager-7f666bcdf6-ftxzb\" (UID: \"3da361da-f0e8-46c5-b637-f7ef4d57b801\") " pod="openshift-controller-manager/controller-manager-7f666bcdf6-ftxzb" Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.352900 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f666bcdf6-ftxzb" Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.361686 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bc8df74b6-jz94j" Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.430223 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7" path="/var/lib/kubelet/pods/cdb9b020-e54a-4c17-a077-4bd4ccf6bbf7/volumes" Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.431003 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0fbecb5-2cec-47ff-bd33-c8fe66b865cf" path="/var/lib/kubelet/pods/e0fbecb5-2cec-47ff-bd33-c8fe66b865cf/volumes" Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.597152 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bc8df74b6-jz94j"] Jan 22 09:16:07 crc kubenswrapper[4892]: W0122 09:16:07.601152 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc575c904_f1d7_4b16_b512_8fcdbf63fc65.slice/crio-781f727da9c13b4720c820053113211d1d9b62ddd488f06d8da5a58bcf44cc9f WatchSource:0}: Error finding container 781f727da9c13b4720c820053113211d1d9b62ddd488f06d8da5a58bcf44cc9f: Status 404 returned error can't find the container with id 781f727da9c13b4720c820053113211d1d9b62ddd488f06d8da5a58bcf44cc9f Jan 22 09:16:07 crc kubenswrapper[4892]: I0122 09:16:07.756140 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f666bcdf6-ftxzb"] Jan 22 09:16:07 crc kubenswrapper[4892]: W0122 09:16:07.763136 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3da361da_f0e8_46c5_b637_f7ef4d57b801.slice/crio-c07df04d6d960f15dd0af7e9cfc68274ed6c5ab06d6bcc325fd70322aa43269e WatchSource:0}: Error finding container c07df04d6d960f15dd0af7e9cfc68274ed6c5ab06d6bcc325fd70322aa43269e: Status 404 returned error can't find the container with id c07df04d6d960f15dd0af7e9cfc68274ed6c5ab06d6bcc325fd70322aa43269e Jan 22 09:16:08 crc kubenswrapper[4892]: I0122 09:16:08.070083 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bc8df74b6-jz94j" event={"ID":"c575c904-f1d7-4b16-b512-8fcdbf63fc65","Type":"ContainerStarted","Data":"781f727da9c13b4720c820053113211d1d9b62ddd488f06d8da5a58bcf44cc9f"} Jan 22 09:16:08 crc kubenswrapper[4892]: I0122 09:16:08.071018 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f666bcdf6-ftxzb" event={"ID":"3da361da-f0e8-46c5-b637-f7ef4d57b801","Type":"ContainerStarted","Data":"c07df04d6d960f15dd0af7e9cfc68274ed6c5ab06d6bcc325fd70322aa43269e"} Jan 22 09:16:09 crc kubenswrapper[4892]: I0122 09:16:09.079635 4892 kubelet.go:2453] "SyncLoop 
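Note: the event={...} payload in the "SyncLoop (PLEG)" lines above is valid JSON, so pod lifecycle events can be pulled out of the journal mechanically. A small, hypothetical parser; the plegEvent field names simply mirror the keys visible in the log:

```go
package main

import (
	"encoding/json"
	"fmt"
	"regexp"
)

// plegEvent mirrors the event={...} payload printed by kubelet.go:2453.
type plegEvent struct {
	ID   string `json:"ID"`   // pod UID
	Type string `json:"Type"` // ContainerStarted, ContainerDied, ...
	Data string `json:"Data"` // container or sandbox ID
}

var eventRe = regexp.MustCompile(`event=(\{.*?\})`)

// parsePLEG extracts and decodes the first event payload on a log line.
func parsePLEG(line string) (*plegEvent, bool) {
	m := eventRe.FindStringSubmatch(line)
	if m == nil {
		return nil, false
	}
	var ev plegEvent
	if err := json.Unmarshal([]byte(m[1]), &ev); err != nil {
		return nil, false
	}
	return &ev, true
}

func main() {
	line := `I0122 09:16:08.070083 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" ` +
		`pod="openshift-route-controller-manager/route-controller-manager-6bc8df74b6-jz94j" ` +
		`event={"ID":"c575c904-f1d7-4b16-b512-8fcdbf63fc65","Type":"ContainerStarted","Data":"781f727da9c13b"}`
	if ev, ok := parsePLEG(line); ok {
		fmt.Printf("pod %s: %s %s\n", ev.ID, ev.Type, ev.Data)
	}
}
```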
(PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bc8df74b6-jz94j" event={"ID":"c575c904-f1d7-4b16-b512-8fcdbf63fc65","Type":"ContainerStarted","Data":"bf5e16bbc87f72584eeb9b4c9958f6b042e18d141755acff484bb21cda62009d"} Jan 22 09:16:09 crc kubenswrapper[4892]: I0122 09:16:09.081026 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6bc8df74b6-jz94j" Jan 22 09:16:09 crc kubenswrapper[4892]: I0122 09:16:09.081150 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f666bcdf6-ftxzb" event={"ID":"3da361da-f0e8-46c5-b637-f7ef4d57b801","Type":"ContainerStarted","Data":"0136fdad84a8b784a9461134389ef8f37283cc52c601b7f43abc3e1e785dcd5a"} Jan 22 09:16:09 crc kubenswrapper[4892]: I0122 09:16:09.081315 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f666bcdf6-ftxzb" Jan 22 09:16:09 crc kubenswrapper[4892]: I0122 09:16:09.086021 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7f666bcdf6-ftxzb" Jan 22 09:16:09 crc kubenswrapper[4892]: I0122 09:16:09.087013 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6bc8df74b6-jz94j" Jan 22 09:16:09 crc kubenswrapper[4892]: I0122 09:16:09.121521 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6bc8df74b6-jz94j" podStartSLOduration=4.121503622 podStartE2EDuration="4.121503622s" podCreationTimestamp="2026-01-22 09:16:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:16:09.099587548 +0000 UTC m=+338.943666611" watchObservedRunningTime="2026-01-22 09:16:09.121503622 +0000 UTC m=+338.965582685" Jan 22 09:16:09 crc kubenswrapper[4892]: I0122 09:16:09.122335 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7f666bcdf6-ftxzb" podStartSLOduration=4.122328123 podStartE2EDuration="4.122328123s" podCreationTimestamp="2026-01-22 09:16:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:16:09.12065609 +0000 UTC m=+338.964735153" watchObservedRunningTime="2026-01-22 09:16:09.122328123 +0000 UTC m=+338.966407186" Jan 22 09:16:28 crc kubenswrapper[4892]: I0122 09:16:28.828019 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f666bcdf6-ftxzb"] Jan 22 09:16:28 crc kubenswrapper[4892]: I0122 09:16:28.828792 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7f666bcdf6-ftxzb" podUID="3da361da-f0e8-46c5-b637-f7ef4d57b801" containerName="controller-manager" containerID="cri-o://0136fdad84a8b784a9461134389ef8f37283cc52c601b7f43abc3e1e785dcd5a" gracePeriod=30 Jan 22 09:16:28 crc kubenswrapper[4892]: I0122 09:16:28.847852 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bc8df74b6-jz94j"] Jan 22 09:16:28 crc kubenswrapper[4892]: I0122 09:16:28.848094 4892 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-route-controller-manager/route-controller-manager-6bc8df74b6-jz94j" podUID="c575c904-f1d7-4b16-b512-8fcdbf63fc65" containerName="route-controller-manager" containerID="cri-o://bf5e16bbc87f72584eeb9b4c9958f6b042e18d141755acff484bb21cda62009d" gracePeriod=30 Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.186796 4892 generic.go:334] "Generic (PLEG): container finished" podID="c575c904-f1d7-4b16-b512-8fcdbf63fc65" containerID="bf5e16bbc87f72584eeb9b4c9958f6b042e18d141755acff484bb21cda62009d" exitCode=0 Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.186894 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bc8df74b6-jz94j" event={"ID":"c575c904-f1d7-4b16-b512-8fcdbf63fc65","Type":"ContainerDied","Data":"bf5e16bbc87f72584eeb9b4c9958f6b042e18d141755acff484bb21cda62009d"} Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.189220 4892 generic.go:334] "Generic (PLEG): container finished" podID="3da361da-f0e8-46c5-b637-f7ef4d57b801" containerID="0136fdad84a8b784a9461134389ef8f37283cc52c601b7f43abc3e1e785dcd5a" exitCode=0 Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.189249 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f666bcdf6-ftxzb" event={"ID":"3da361da-f0e8-46c5-b637-f7ef4d57b801","Type":"ContainerDied","Data":"0136fdad84a8b784a9461134389ef8f37283cc52c601b7f43abc3e1e785dcd5a"} Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.593995 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bc8df74b6-jz94j" Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.725805 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c575c904-f1d7-4b16-b512-8fcdbf63fc65-config\") pod \"c575c904-f1d7-4b16-b512-8fcdbf63fc65\" (UID: \"c575c904-f1d7-4b16-b512-8fcdbf63fc65\") " Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.725844 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c575c904-f1d7-4b16-b512-8fcdbf63fc65-serving-cert\") pod \"c575c904-f1d7-4b16-b512-8fcdbf63fc65\" (UID: \"c575c904-f1d7-4b16-b512-8fcdbf63fc65\") " Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.725889 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6hxv\" (UniqueName: \"kubernetes.io/projected/c575c904-f1d7-4b16-b512-8fcdbf63fc65-kube-api-access-f6hxv\") pod \"c575c904-f1d7-4b16-b512-8fcdbf63fc65\" (UID: \"c575c904-f1d7-4b16-b512-8fcdbf63fc65\") " Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.725953 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c575c904-f1d7-4b16-b512-8fcdbf63fc65-client-ca\") pod \"c575c904-f1d7-4b16-b512-8fcdbf63fc65\" (UID: \"c575c904-f1d7-4b16-b512-8fcdbf63fc65\") " Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.726894 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c575c904-f1d7-4b16-b512-8fcdbf63fc65-client-ca" (OuterVolumeSpecName: "client-ca") pod "c575c904-f1d7-4b16-b512-8fcdbf63fc65" (UID: "c575c904-f1d7-4b16-b512-8fcdbf63fc65"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.727155 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c575c904-f1d7-4b16-b512-8fcdbf63fc65-config" (OuterVolumeSpecName: "config") pod "c575c904-f1d7-4b16-b512-8fcdbf63fc65" (UID: "c575c904-f1d7-4b16-b512-8fcdbf63fc65"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.731541 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c575c904-f1d7-4b16-b512-8fcdbf63fc65-kube-api-access-f6hxv" (OuterVolumeSpecName: "kube-api-access-f6hxv") pod "c575c904-f1d7-4b16-b512-8fcdbf63fc65" (UID: "c575c904-f1d7-4b16-b512-8fcdbf63fc65"). InnerVolumeSpecName "kube-api-access-f6hxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.731586 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c575c904-f1d7-4b16-b512-8fcdbf63fc65-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c575c904-f1d7-4b16-b512-8fcdbf63fc65" (UID: "c575c904-f1d7-4b16-b512-8fcdbf63fc65"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.759838 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f666bcdf6-ftxzb" Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.826528 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3da361da-f0e8-46c5-b637-f7ef4d57b801-config\") pod \"3da361da-f0e8-46c5-b637-f7ef4d57b801\" (UID: \"3da361da-f0e8-46c5-b637-f7ef4d57b801\") " Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.826576 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpg6j\" (UniqueName: \"kubernetes.io/projected/3da361da-f0e8-46c5-b637-f7ef4d57b801-kube-api-access-kpg6j\") pod \"3da361da-f0e8-46c5-b637-f7ef4d57b801\" (UID: \"3da361da-f0e8-46c5-b637-f7ef4d57b801\") " Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.826613 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3da361da-f0e8-46c5-b637-f7ef4d57b801-serving-cert\") pod \"3da361da-f0e8-46c5-b637-f7ef4d57b801\" (UID: \"3da361da-f0e8-46c5-b637-f7ef4d57b801\") " Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.826658 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3da361da-f0e8-46c5-b637-f7ef4d57b801-proxy-ca-bundles\") pod \"3da361da-f0e8-46c5-b637-f7ef4d57b801\" (UID: \"3da361da-f0e8-46c5-b637-f7ef4d57b801\") " Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.826712 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3da361da-f0e8-46c5-b637-f7ef4d57b801-client-ca\") pod \"3da361da-f0e8-46c5-b637-f7ef4d57b801\" (UID: \"3da361da-f0e8-46c5-b637-f7ef4d57b801\") " Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.827007 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c575c904-f1d7-4b16-b512-8fcdbf63fc65-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.827025 4892 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c575c904-f1d7-4b16-b512-8fcdbf63fc65-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.827037 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6hxv\" (UniqueName: \"kubernetes.io/projected/c575c904-f1d7-4b16-b512-8fcdbf63fc65-kube-api-access-f6hxv\") on node \"crc\" DevicePath \"\"" Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.827051 4892 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c575c904-f1d7-4b16-b512-8fcdbf63fc65-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.827607 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3da361da-f0e8-46c5-b637-f7ef4d57b801-client-ca" (OuterVolumeSpecName: "client-ca") pod "3da361da-f0e8-46c5-b637-f7ef4d57b801" (UID: "3da361da-f0e8-46c5-b637-f7ef4d57b801"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.827648 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3da361da-f0e8-46c5-b637-f7ef4d57b801-config" (OuterVolumeSpecName: "config") pod "3da361da-f0e8-46c5-b637-f7ef4d57b801" (UID: "3da361da-f0e8-46c5-b637-f7ef4d57b801"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.827911 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3da361da-f0e8-46c5-b637-f7ef4d57b801-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3da361da-f0e8-46c5-b637-f7ef4d57b801" (UID: "3da361da-f0e8-46c5-b637-f7ef4d57b801"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.830151 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da361da-f0e8-46c5-b637-f7ef4d57b801-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3da361da-f0e8-46c5-b637-f7ef4d57b801" (UID: "3da361da-f0e8-46c5-b637-f7ef4d57b801"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.830209 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3da361da-f0e8-46c5-b637-f7ef4d57b801-kube-api-access-kpg6j" (OuterVolumeSpecName: "kube-api-access-kpg6j") pod "3da361da-f0e8-46c5-b637-f7ef4d57b801" (UID: "3da361da-f0e8-46c5-b637-f7ef4d57b801"). InnerVolumeSpecName "kube-api-access-kpg6j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.928565 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3da361da-f0e8-46c5-b637-f7ef4d57b801-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.928596 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpg6j\" (UniqueName: \"kubernetes.io/projected/3da361da-f0e8-46c5-b637-f7ef4d57b801-kube-api-access-kpg6j\") on node \"crc\" DevicePath \"\"" Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.928608 4892 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3da361da-f0e8-46c5-b637-f7ef4d57b801-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.928621 4892 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3da361da-f0e8-46c5-b637-f7ef4d57b801-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.928631 4892 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3da361da-f0e8-46c5-b637-f7ef4d57b801-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.996358 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-58769bd6c6-ms2dp"] Jan 22 09:16:29 crc kubenswrapper[4892]: E0122 09:16:29.996695 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c575c904-f1d7-4b16-b512-8fcdbf63fc65" containerName="route-controller-manager" Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.996720 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="c575c904-f1d7-4b16-b512-8fcdbf63fc65" containerName="route-controller-manager" Jan 22 09:16:29 crc kubenswrapper[4892]: E0122 09:16:29.996742 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da361da-f0e8-46c5-b637-f7ef4d57b801" containerName="controller-manager" Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.996752 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da361da-f0e8-46c5-b637-f7ef4d57b801" containerName="controller-manager" Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.997424 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="c575c904-f1d7-4b16-b512-8fcdbf63fc65" containerName="route-controller-manager" Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.997463 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="3da361da-f0e8-46c5-b637-f7ef4d57b801" containerName="controller-manager" Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.997933 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58769bd6c6-ms2dp" Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.998937 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55f55bbbcb-sssmc"] Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.999613 4892 util.go:30] "No sandbox for pod can be found. 
Jan 22 09:16:29 crc kubenswrapper[4892]: I0122 09:16:29.999613 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55f55bbbcb-sssmc"
Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.006153 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55f55bbbcb-sssmc"]
Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.009400 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58769bd6c6-ms2dp"]
Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.030368 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cce4b927-42c3-4207-878b-3d8a95986613-config\") pod \"controller-manager-58769bd6c6-ms2dp\" (UID: \"cce4b927-42c3-4207-878b-3d8a95986613\") " pod="openshift-controller-manager/controller-manager-58769bd6c6-ms2dp"
Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.030404 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cce4b927-42c3-4207-878b-3d8a95986613-proxy-ca-bundles\") pod \"controller-manager-58769bd6c6-ms2dp\" (UID: \"cce4b927-42c3-4207-878b-3d8a95986613\") " pod="openshift-controller-manager/controller-manager-58769bd6c6-ms2dp"
Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.030438 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4de81e7-58f3-49c3-a745-d8d61eac4a75-config\") pod \"route-controller-manager-55f55bbbcb-sssmc\" (UID: \"e4de81e7-58f3-49c3-a745-d8d61eac4a75\") " pod="openshift-route-controller-manager/route-controller-manager-55f55bbbcb-sssmc"
Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.030464 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zgsh\" (UniqueName: \"kubernetes.io/projected/e4de81e7-58f3-49c3-a745-d8d61eac4a75-kube-api-access-4zgsh\") pod \"route-controller-manager-55f55bbbcb-sssmc\" (UID: \"e4de81e7-58f3-49c3-a745-d8d61eac4a75\") " pod="openshift-route-controller-manager/route-controller-manager-55f55bbbcb-sssmc"
Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.030562 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrwz9\" (UniqueName: \"kubernetes.io/projected/cce4b927-42c3-4207-878b-3d8a95986613-kube-api-access-qrwz9\") pod \"controller-manager-58769bd6c6-ms2dp\" (UID: \"cce4b927-42c3-4207-878b-3d8a95986613\") " pod="openshift-controller-manager/controller-manager-58769bd6c6-ms2dp"
Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.030605 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4de81e7-58f3-49c3-a745-d8d61eac4a75-serving-cert\") pod \"route-controller-manager-55f55bbbcb-sssmc\" (UID: \"e4de81e7-58f3-49c3-a745-d8d61eac4a75\") " pod="openshift-route-controller-manager/route-controller-manager-55f55bbbcb-sssmc"
Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.030633 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cce4b927-42c3-4207-878b-3d8a95986613-client-ca\") pod \"controller-manager-58769bd6c6-ms2dp\" (UID: \"cce4b927-42c3-4207-878b-3d8a95986613\") " pod="openshift-controller-manager/controller-manager-58769bd6c6-ms2dp"
Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.030680 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cce4b927-42c3-4207-878b-3d8a95986613-serving-cert\") pod \"controller-manager-58769bd6c6-ms2dp\" (UID: \"cce4b927-42c3-4207-878b-3d8a95986613\") " pod="openshift-controller-manager/controller-manager-58769bd6c6-ms2dp"
Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.030707 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4de81e7-58f3-49c3-a745-d8d61eac4a75-client-ca\") pod \"route-controller-manager-55f55bbbcb-sssmc\" (UID: \"e4de81e7-58f3-49c3-a745-d8d61eac4a75\") " pod="openshift-route-controller-manager/route-controller-manager-55f55bbbcb-sssmc"
Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.131963 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrwz9\" (UniqueName: \"kubernetes.io/projected/cce4b927-42c3-4207-878b-3d8a95986613-kube-api-access-qrwz9\") pod \"controller-manager-58769bd6c6-ms2dp\" (UID: \"cce4b927-42c3-4207-878b-3d8a95986613\") " pod="openshift-controller-manager/controller-manager-58769bd6c6-ms2dp"
Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.132020 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4de81e7-58f3-49c3-a745-d8d61eac4a75-serving-cert\") pod \"route-controller-manager-55f55bbbcb-sssmc\" (UID: \"e4de81e7-58f3-49c3-a745-d8d61eac4a75\") " pod="openshift-route-controller-manager/route-controller-manager-55f55bbbcb-sssmc"
Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.132042 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cce4b927-42c3-4207-878b-3d8a95986613-client-ca\") pod \"controller-manager-58769bd6c6-ms2dp\" (UID: \"cce4b927-42c3-4207-878b-3d8a95986613\") " pod="openshift-controller-manager/controller-manager-58769bd6c6-ms2dp"
Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.132081 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cce4b927-42c3-4207-878b-3d8a95986613-serving-cert\") pod \"controller-manager-58769bd6c6-ms2dp\" (UID: \"cce4b927-42c3-4207-878b-3d8a95986613\") " pod="openshift-controller-manager/controller-manager-58769bd6c6-ms2dp"
Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.132104 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4de81e7-58f3-49c3-a745-d8d61eac4a75-client-ca\") pod \"route-controller-manager-55f55bbbcb-sssmc\" (UID: \"e4de81e7-58f3-49c3-a745-d8d61eac4a75\") " pod="openshift-route-controller-manager/route-controller-manager-55f55bbbcb-sssmc"
Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.132154 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cce4b927-42c3-4207-878b-3d8a95986613-config\") pod \"controller-manager-58769bd6c6-ms2dp\" (UID: \"cce4b927-42c3-4207-878b-3d8a95986613\") " pod="openshift-controller-manager/controller-manager-58769bd6c6-ms2dp"
Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.132176 4892
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cce4b927-42c3-4207-878b-3d8a95986613-proxy-ca-bundles\") pod \"controller-manager-58769bd6c6-ms2dp\" (UID: \"cce4b927-42c3-4207-878b-3d8a95986613\") " pod="openshift-controller-manager/controller-manager-58769bd6c6-ms2dp" Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.132215 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4de81e7-58f3-49c3-a745-d8d61eac4a75-config\") pod \"route-controller-manager-55f55bbbcb-sssmc\" (UID: \"e4de81e7-58f3-49c3-a745-d8d61eac4a75\") " pod="openshift-route-controller-manager/route-controller-manager-55f55bbbcb-sssmc" Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.132247 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zgsh\" (UniqueName: \"kubernetes.io/projected/e4de81e7-58f3-49c3-a745-d8d61eac4a75-kube-api-access-4zgsh\") pod \"route-controller-manager-55f55bbbcb-sssmc\" (UID: \"e4de81e7-58f3-49c3-a745-d8d61eac4a75\") " pod="openshift-route-controller-manager/route-controller-manager-55f55bbbcb-sssmc" Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.133354 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cce4b927-42c3-4207-878b-3d8a95986613-client-ca\") pod \"controller-manager-58769bd6c6-ms2dp\" (UID: \"cce4b927-42c3-4207-878b-3d8a95986613\") " pod="openshift-controller-manager/controller-manager-58769bd6c6-ms2dp" Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.133418 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4de81e7-58f3-49c3-a745-d8d61eac4a75-client-ca\") pod \"route-controller-manager-55f55bbbcb-sssmc\" (UID: \"e4de81e7-58f3-49c3-a745-d8d61eac4a75\") " pod="openshift-route-controller-manager/route-controller-manager-55f55bbbcb-sssmc" Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.133780 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4de81e7-58f3-49c3-a745-d8d61eac4a75-config\") pod \"route-controller-manager-55f55bbbcb-sssmc\" (UID: \"e4de81e7-58f3-49c3-a745-d8d61eac4a75\") " pod="openshift-route-controller-manager/route-controller-manager-55f55bbbcb-sssmc" Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.133958 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cce4b927-42c3-4207-878b-3d8a95986613-proxy-ca-bundles\") pod \"controller-manager-58769bd6c6-ms2dp\" (UID: \"cce4b927-42c3-4207-878b-3d8a95986613\") " pod="openshift-controller-manager/controller-manager-58769bd6c6-ms2dp" Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.135482 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cce4b927-42c3-4207-878b-3d8a95986613-config\") pod \"controller-manager-58769bd6c6-ms2dp\" (UID: \"cce4b927-42c3-4207-878b-3d8a95986613\") " pod="openshift-controller-manager/controller-manager-58769bd6c6-ms2dp" Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.137096 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4de81e7-58f3-49c3-a745-d8d61eac4a75-serving-cert\") pod 
\"route-controller-manager-55f55bbbcb-sssmc\" (UID: \"e4de81e7-58f3-49c3-a745-d8d61eac4a75\") " pod="openshift-route-controller-manager/route-controller-manager-55f55bbbcb-sssmc" Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.137095 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cce4b927-42c3-4207-878b-3d8a95986613-serving-cert\") pod \"controller-manager-58769bd6c6-ms2dp\" (UID: \"cce4b927-42c3-4207-878b-3d8a95986613\") " pod="openshift-controller-manager/controller-manager-58769bd6c6-ms2dp" Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.147956 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrwz9\" (UniqueName: \"kubernetes.io/projected/cce4b927-42c3-4207-878b-3d8a95986613-kube-api-access-qrwz9\") pod \"controller-manager-58769bd6c6-ms2dp\" (UID: \"cce4b927-42c3-4207-878b-3d8a95986613\") " pod="openshift-controller-manager/controller-manager-58769bd6c6-ms2dp" Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.153748 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zgsh\" (UniqueName: \"kubernetes.io/projected/e4de81e7-58f3-49c3-a745-d8d61eac4a75-kube-api-access-4zgsh\") pod \"route-controller-manager-55f55bbbcb-sssmc\" (UID: \"e4de81e7-58f3-49c3-a745-d8d61eac4a75\") " pod="openshift-route-controller-manager/route-controller-manager-55f55bbbcb-sssmc" Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.194371 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bc8df74b6-jz94j" event={"ID":"c575c904-f1d7-4b16-b512-8fcdbf63fc65","Type":"ContainerDied","Data":"781f727da9c13b4720c820053113211d1d9b62ddd488f06d8da5a58bcf44cc9f"} Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.194425 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bc8df74b6-jz94j" Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.194433 4892 scope.go:117] "RemoveContainer" containerID="bf5e16bbc87f72584eeb9b4c9958f6b042e18d141755acff484bb21cda62009d" Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.206909 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f666bcdf6-ftxzb" event={"ID":"3da361da-f0e8-46c5-b637-f7ef4d57b801","Type":"ContainerDied","Data":"c07df04d6d960f15dd0af7e9cfc68274ed6c5ab06d6bcc325fd70322aa43269e"} Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.207161 4892 util.go:48] "No ready sandbox for pod can be found. 
Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.230652 4892 scope.go:117] "RemoveContainer" containerID="0136fdad84a8b784a9461134389ef8f37283cc52c601b7f43abc3e1e785dcd5a"
Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.234463 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f666bcdf6-ftxzb"]
Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.237258 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7f666bcdf6-ftxzb"]
Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.248161 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bc8df74b6-jz94j"]
Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.250942 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bc8df74b6-jz94j"]
Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.338618 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58769bd6c6-ms2dp"
Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.345392 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55f55bbbcb-sssmc"
Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.745260 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55f55bbbcb-sssmc"]
Jan 22 09:16:30 crc kubenswrapper[4892]: I0122 09:16:30.748962 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58769bd6c6-ms2dp"]
Jan 22 09:16:30 crc kubenswrapper[4892]: W0122 09:16:30.756205 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcce4b927_42c3_4207_878b_3d8a95986613.slice/crio-2be4a211f6b7251ec4c49ab3793e82ba78df8b8a4af99a4ccc83ac5a32bf29c5 WatchSource:0}: Error finding container 2be4a211f6b7251ec4c49ab3793e82ba78df8b8a4af99a4ccc83ac5a32bf29c5: Status 404 returned error can't find the container with id 2be4a211f6b7251ec4c49ab3793e82ba78df8b8a4af99a4ccc83ac5a32bf29c5
Jan 22 09:16:31 crc kubenswrapper[4892]: I0122 09:16:31.216557 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55f55bbbcb-sssmc" event={"ID":"e4de81e7-58f3-49c3-a745-d8d61eac4a75","Type":"ContainerStarted","Data":"6a9a90c14e510e6aabdb670e2085a9563d4d201e5d7179c251b044aca5e551bd"}
Jan 22 09:16:31 crc kubenswrapper[4892]: I0122 09:16:31.216910 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-55f55bbbcb-sssmc"
Jan 22 09:16:31 crc kubenswrapper[4892]: I0122 09:16:31.216926 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55f55bbbcb-sssmc" event={"ID":"e4de81e7-58f3-49c3-a745-d8d61eac4a75","Type":"ContainerStarted","Data":"1b9e06bf99c42cac7e829e591ee61e3c71c5c5de01ef0eaf61c931f0718de751"}
Jan 22 09:16:31 crc kubenswrapper[4892]: I0122 09:16:31.218036 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58769bd6c6-ms2dp" event={"ID":"cce4b927-42c3-4207-878b-3d8a95986613","Type":"ContainerStarted","Data":"21722aeed492721c6fca47e4a4fe091ea0b3942fc4a0469be45278356e880773"}
Jan 22 09:16:31 crc kubenswrapper[4892]: I0122 09:16:31.218097 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58769bd6c6-ms2dp" event={"ID":"cce4b927-42c3-4207-878b-3d8a95986613","Type":"ContainerStarted","Data":"2be4a211f6b7251ec4c49ab3793e82ba78df8b8a4af99a4ccc83ac5a32bf29c5"}
Jan 22 09:16:31 crc kubenswrapper[4892]: I0122 09:16:31.260249 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-58769bd6c6-ms2dp" podStartSLOduration=3.260228431 podStartE2EDuration="3.260228431s" podCreationTimestamp="2026-01-22 09:16:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:16:31.255403389 +0000 UTC m=+361.099482452" watchObservedRunningTime="2026-01-22 09:16:31.260228431 +0000 UTC m=+361.104307494"
Jan 22 09:16:31 crc kubenswrapper[4892]: I0122 09:16:31.262335 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-55f55bbbcb-sssmc" podStartSLOduration=3.262323854 podStartE2EDuration="3.262323854s" podCreationTimestamp="2026-01-22 09:16:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:16:31.235571748 +0000 UTC m=+361.079650821" watchObservedRunningTime="2026-01-22 09:16:31.262323854 +0000 UTC m=+361.106402917"
Jan 22 09:16:31 crc kubenswrapper[4892]: I0122 09:16:31.405000 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-55f55bbbcb-sssmc"
Jan 22 09:16:31 crc kubenswrapper[4892]: I0122 09:16:31.424642 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3da361da-f0e8-46c5-b637-f7ef4d57b801" path="/var/lib/kubelet/pods/3da361da-f0e8-46c5-b637-f7ef4d57b801/volumes"
Jan 22 09:16:31 crc kubenswrapper[4892]: I0122 09:16:31.425243 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c575c904-f1d7-4b16-b512-8fcdbf63fc65" path="/var/lib/kubelet/pods/c575c904-f1d7-4b16-b512-8fcdbf63fc65/volumes"
Jan 22 09:16:32 crc kubenswrapper[4892]: I0122 09:16:32.224226 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-58769bd6c6-ms2dp"
Jan 22 09:16:32 crc kubenswrapper[4892]: I0122 09:16:32.229462 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-58769bd6c6-ms2dp"
Jan 22 09:16:46 crc kubenswrapper[4892]: I0122 09:16:46.323424 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 09:16:46 crc kubenswrapper[4892]: I0122 09:16:46.324165 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 09:17:16 crc kubenswrapper[4892]: I0122 09:17:16.323205 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 09:17:16 crc kubenswrapper[4892]: I0122 09:17:16.323756 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 09:17:46 crc kubenswrapper[4892]: I0122 09:17:46.323990 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 09:17:46 crc kubenswrapper[4892]: I0122 09:17:46.324576 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 09:17:46 crc kubenswrapper[4892]: I0122 09:17:46.324620 4892 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w87tf"
Jan 22 09:17:46 crc kubenswrapper[4892]: I0122 09:17:46.325117 4892 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a7f0526153acdca2ca5f99af784bf184f41709f20620fb5551c5c6b34103a995"} pod="openshift-machine-config-operator/machine-config-daemon-w87tf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 09:17:46 crc kubenswrapper[4892]: I0122 09:17:46.325161 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" containerID="cri-o://a7f0526153acdca2ca5f99af784bf184f41709f20620fb5551c5c6b34103a995" gracePeriod=600
Jan 22 09:17:46 crc kubenswrapper[4892]: I0122 09:17:46.682427 4892 generic.go:334] "Generic (PLEG): container finished" podID="4765e554-3060-4876-90fe-5e054619d7a1" containerID="a7f0526153acdca2ca5f99af784bf184f41709f20620fb5551c5c6b34103a995" exitCode=0
Jan 22 09:17:46 crc kubenswrapper[4892]: I0122 09:17:46.682523 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerDied","Data":"a7f0526153acdca2ca5f99af784bf184f41709f20620fb5551c5c6b34103a995"}
Jan 22 09:17:46 crc kubenswrapper[4892]: I0122 09:17:46.682711 4892 scope.go:117] "RemoveContainer" containerID="f3a3fb7887e93a63b627dc58e5e7d8ffee096bc9473701598d39daa2bbfad86a"
Jan 22 09:17:48 crc kubenswrapper[4892]: I0122 09:17:48.696723 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerStarted","Data":"fd44fb84f1abd6068b0406af0dfd71eaeeb9adbf12f608ae3695759f64602a98"}
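[Note] The liveness probe against http://127.0.0.1:8798/health fails at 09:16:46, 09:17:16 and 09:17:46, thirty seconds apart, and the kill with gracePeriod=600 follows the third failure, which is consistent with the kubelet's default liveness failureThreshold of 3. The exitCode=0 that follows shows the daemon shut down cleanly on SIGTERM. A sketch that counts consecutive failures per pod from entries like the ones above (the reset heuristic is an assumption; a real tool would track restarts per container):

import re
from collections import defaultdict

FAILURE_THRESHOLD = 3  # kubelet default for liveness probes

FAIL = re.compile(r'"Probe failed" probeType="Liveness" pod="([^"]+)"')
streaks = defaultdict(int)

def feed(line):
    m = FAIL.search(line)
    if m:
        streaks[m.group(1)] += 1
        if streaks[m.group(1)] == FAILURE_THRESHOLD:
            print(m.group(1), "hit the liveness failure threshold")
    elif '"Type":"ContainerStarted"' in line:
        streaks.clear()  # coarse reset once a container comes back up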
event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerStarted","Data":"fd44fb84f1abd6068b0406af0dfd71eaeeb9adbf12f608ae3695759f64602a98"} Jan 22 09:20:12 crc kubenswrapper[4892]: I0122 09:20:12.521204 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-rzj7t"] Jan 22 09:20:12 crc kubenswrapper[4892]: I0122 09:20:12.522486 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-rzj7t" Jan 22 09:20:12 crc kubenswrapper[4892]: I0122 09:20:12.525144 4892 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-9p9rd" Jan 22 09:20:12 crc kubenswrapper[4892]: I0122 09:20:12.525975 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 22 09:20:12 crc kubenswrapper[4892]: I0122 09:20:12.526438 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 22 09:20:12 crc kubenswrapper[4892]: I0122 09:20:12.533522 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-rzj7t"] Jan 22 09:20:12 crc kubenswrapper[4892]: I0122 09:20:12.538778 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-t8w6r"] Jan 22 09:20:12 crc kubenswrapper[4892]: I0122 09:20:12.539382 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-t8w6r" Jan 22 09:20:12 crc kubenswrapper[4892]: I0122 09:20:12.542128 4892 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-c5t4l" Jan 22 09:20:12 crc kubenswrapper[4892]: I0122 09:20:12.548571 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-rv9dl"] Jan 22 09:20:12 crc kubenswrapper[4892]: I0122 09:20:12.549359 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-rv9dl" Jan 22 09:20:12 crc kubenswrapper[4892]: I0122 09:20:12.552642 4892 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-z969g" Jan 22 09:20:12 crc kubenswrapper[4892]: I0122 09:20:12.557587 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-t8w6r"] Jan 22 09:20:12 crc kubenswrapper[4892]: I0122 09:20:12.597850 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-rv9dl"] Jan 22 09:20:12 crc kubenswrapper[4892]: I0122 09:20:12.660059 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfcqf\" (UniqueName: \"kubernetes.io/projected/5c3dbc91-88ca-44dc-a4fd-fb147d8df3e0-kube-api-access-wfcqf\") pod \"cert-manager-858654f9db-t8w6r\" (UID: \"5c3dbc91-88ca-44dc-a4fd-fb147d8df3e0\") " pod="cert-manager/cert-manager-858654f9db-t8w6r" Jan 22 09:20:12 crc kubenswrapper[4892]: I0122 09:20:12.660188 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db7d8\" (UniqueName: \"kubernetes.io/projected/6035615e-d06d-45df-b927-9233155546ce-kube-api-access-db7d8\") pod \"cert-manager-cainjector-cf98fcc89-rzj7t\" (UID: \"6035615e-d06d-45df-b927-9233155546ce\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-rzj7t" Jan 22 09:20:12 crc kubenswrapper[4892]: I0122 09:20:12.660274 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7rr7\" (UniqueName: \"kubernetes.io/projected/fc56bdec-62b2-486e-84c5-363cc15c5cec-kube-api-access-h7rr7\") pod \"cert-manager-webhook-687f57d79b-rv9dl\" (UID: \"fc56bdec-62b2-486e-84c5-363cc15c5cec\") " pod="cert-manager/cert-manager-webhook-687f57d79b-rv9dl" Jan 22 09:20:12 crc kubenswrapper[4892]: I0122 09:20:12.762073 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db7d8\" (UniqueName: \"kubernetes.io/projected/6035615e-d06d-45df-b927-9233155546ce-kube-api-access-db7d8\") pod \"cert-manager-cainjector-cf98fcc89-rzj7t\" (UID: \"6035615e-d06d-45df-b927-9233155546ce\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-rzj7t" Jan 22 09:20:12 crc kubenswrapper[4892]: I0122 09:20:12.762159 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7rr7\" (UniqueName: \"kubernetes.io/projected/fc56bdec-62b2-486e-84c5-363cc15c5cec-kube-api-access-h7rr7\") pod \"cert-manager-webhook-687f57d79b-rv9dl\" (UID: \"fc56bdec-62b2-486e-84c5-363cc15c5cec\") " pod="cert-manager/cert-manager-webhook-687f57d79b-rv9dl" Jan 22 09:20:12 crc kubenswrapper[4892]: I0122 09:20:12.762236 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfcqf\" (UniqueName: \"kubernetes.io/projected/5c3dbc91-88ca-44dc-a4fd-fb147d8df3e0-kube-api-access-wfcqf\") pod \"cert-manager-858654f9db-t8w6r\" (UID: \"5c3dbc91-88ca-44dc-a4fd-fb147d8df3e0\") " pod="cert-manager/cert-manager-858654f9db-t8w6r" Jan 22 09:20:12 crc kubenswrapper[4892]: I0122 09:20:12.779977 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db7d8\" (UniqueName: \"kubernetes.io/projected/6035615e-d06d-45df-b927-9233155546ce-kube-api-access-db7d8\") pod \"cert-manager-cainjector-cf98fcc89-rzj7t\" (UID: \"6035615e-d06d-45df-b927-9233155546ce\") " 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-rzj7t" Jan 22 09:20:12 crc kubenswrapper[4892]: I0122 09:20:12.781983 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7rr7\" (UniqueName: \"kubernetes.io/projected/fc56bdec-62b2-486e-84c5-363cc15c5cec-kube-api-access-h7rr7\") pod \"cert-manager-webhook-687f57d79b-rv9dl\" (UID: \"fc56bdec-62b2-486e-84c5-363cc15c5cec\") " pod="cert-manager/cert-manager-webhook-687f57d79b-rv9dl" Jan 22 09:20:12 crc kubenswrapper[4892]: I0122 09:20:12.787396 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfcqf\" (UniqueName: \"kubernetes.io/projected/5c3dbc91-88ca-44dc-a4fd-fb147d8df3e0-kube-api-access-wfcqf\") pod \"cert-manager-858654f9db-t8w6r\" (UID: \"5c3dbc91-88ca-44dc-a4fd-fb147d8df3e0\") " pod="cert-manager/cert-manager-858654f9db-t8w6r" Jan 22 09:20:12 crc kubenswrapper[4892]: I0122 09:20:12.862857 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-rzj7t" Jan 22 09:20:12 crc kubenswrapper[4892]: I0122 09:20:12.872471 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-t8w6r" Jan 22 09:20:12 crc kubenswrapper[4892]: I0122 09:20:12.879831 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-rv9dl" Jan 22 09:20:13 crc kubenswrapper[4892]: I0122 09:20:13.278792 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-rzj7t"] Jan 22 09:20:13 crc kubenswrapper[4892]: I0122 09:20:13.282385 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-t8w6r"] Jan 22 09:20:13 crc kubenswrapper[4892]: I0122 09:20:13.284164 4892 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 09:20:13 crc kubenswrapper[4892]: I0122 09:20:13.329884 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-rv9dl"] Jan 22 09:20:13 crc kubenswrapper[4892]: W0122 09:20:13.335698 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc56bdec_62b2_486e_84c5_363cc15c5cec.slice/crio-ca7210baf1d579cc1ec41cec5b91b616ff3985aae41d4a6450e6b0faf95c9078 WatchSource:0}: Error finding container ca7210baf1d579cc1ec41cec5b91b616ff3985aae41d4a6450e6b0faf95c9078: Status 404 returned error can't find the container with id ca7210baf1d579cc1ec41cec5b91b616ff3985aae41d4a6450e6b0faf95c9078 Jan 22 09:20:13 crc kubenswrapper[4892]: I0122 09:20:13.429311 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-rv9dl" event={"ID":"fc56bdec-62b2-486e-84c5-363cc15c5cec","Type":"ContainerStarted","Data":"ca7210baf1d579cc1ec41cec5b91b616ff3985aae41d4a6450e6b0faf95c9078"} Jan 22 09:20:13 crc kubenswrapper[4892]: I0122 09:20:13.430595 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-t8w6r" event={"ID":"5c3dbc91-88ca-44dc-a4fd-fb147d8df3e0","Type":"ContainerStarted","Data":"ca380fbb7cc17d67e9d8451192153b754b5f90ee2c0b9686e24a9f8536b83755"} Jan 22 09:20:13 crc kubenswrapper[4892]: I0122 09:20:13.431687 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-rzj7t" 
event={"ID":"6035615e-d06d-45df-b927-9233155546ce","Type":"ContainerStarted","Data":"528de20c7c6ae32d253a1344f972664d07ca94d7f861ad5902a0503d6380f245"} Jan 22 09:20:16 crc kubenswrapper[4892]: I0122 09:20:16.324053 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:20:16 crc kubenswrapper[4892]: I0122 09:20:16.324540 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:20:18 crc kubenswrapper[4892]: I0122 09:20:18.458844 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-rv9dl" event={"ID":"fc56bdec-62b2-486e-84c5-363cc15c5cec","Type":"ContainerStarted","Data":"4019f7da5b8f57d4c217cccc222b383d0ef790acece340a76d2234d7d27ce731"} Jan 22 09:20:18 crc kubenswrapper[4892]: I0122 09:20:18.459183 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-rv9dl" Jan 22 09:20:18 crc kubenswrapper[4892]: I0122 09:20:18.460248 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-t8w6r" event={"ID":"5c3dbc91-88ca-44dc-a4fd-fb147d8df3e0","Type":"ContainerStarted","Data":"9428cf17d8e1a5deebc441079a7020167d16e2055c08711f6c1d8697011116d5"} Jan 22 09:20:18 crc kubenswrapper[4892]: I0122 09:20:18.462229 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-rzj7t" event={"ID":"6035615e-d06d-45df-b927-9233155546ce","Type":"ContainerStarted","Data":"5c3648e08f2e88c341e882c49577ff5654c5b7dbb8554e359e90eb720ee3992d"} Jan 22 09:20:18 crc kubenswrapper[4892]: I0122 09:20:18.475276 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-rv9dl" podStartSLOduration=1.892204606 podStartE2EDuration="6.475255469s" podCreationTimestamp="2026-01-22 09:20:12 +0000 UTC" firstStartedPulling="2026-01-22 09:20:13.337459469 +0000 UTC m=+583.181538532" lastFinishedPulling="2026-01-22 09:20:17.920510332 +0000 UTC m=+587.764589395" observedRunningTime="2026-01-22 09:20:18.472904322 +0000 UTC m=+588.316983405" watchObservedRunningTime="2026-01-22 09:20:18.475255469 +0000 UTC m=+588.319334542" Jan 22 09:20:18 crc kubenswrapper[4892]: I0122 09:20:18.491754 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-t8w6r" podStartSLOduration=1.855920758 podStartE2EDuration="6.491736557s" podCreationTimestamp="2026-01-22 09:20:12 +0000 UTC" firstStartedPulling="2026-01-22 09:20:13.284784565 +0000 UTC m=+583.128863628" lastFinishedPulling="2026-01-22 09:20:17.920600364 +0000 UTC m=+587.764679427" observedRunningTime="2026-01-22 09:20:18.488993181 +0000 UTC m=+588.333072274" watchObservedRunningTime="2026-01-22 09:20:18.491736557 +0000 UTC m=+588.335815620" Jan 22 09:20:18 crc kubenswrapper[4892]: I0122 09:20:18.513183 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-rzj7t" podStartSLOduration=1.871443202 
podStartE2EDuration="6.513161734s" podCreationTimestamp="2026-01-22 09:20:12 +0000 UTC" firstStartedPulling="2026-01-22 09:20:13.283910044 +0000 UTC m=+583.127989107" lastFinishedPulling="2026-01-22 09:20:17.925628586 +0000 UTC m=+587.769707639" observedRunningTime="2026-01-22 09:20:18.50968757 +0000 UTC m=+588.353766633" watchObservedRunningTime="2026-01-22 09:20:18.513161734 +0000 UTC m=+588.357240797" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.158375 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-whb2h"] Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.159448 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="ovn-controller" containerID="cri-o://dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237" gracePeriod=30 Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.159522 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="northd" containerID="cri-o://5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855" gracePeriod=30 Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.159603 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="kube-rbac-proxy-node" containerID="cri-o://dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0" gracePeriod=30 Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.159663 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="ovn-acl-logging" containerID="cri-o://168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9" gracePeriod=30 Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.159588 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be" gracePeriod=30 Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.159848 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="sbdb" containerID="cri-o://86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be" gracePeriod=30 Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.159874 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="nbdb" containerID="cri-o://be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c" gracePeriod=30 Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.197568 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="ovnkube-controller" containerID="cri-o://dc60364cbc1956f7f7aa8ad95b265cee21a47c96458b5a7f0f0ad31683c645f8" gracePeriod=30 Jan 22 09:20:22 crc kubenswrapper[4892]: 
I0122 09:20:22.452046 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-whb2h_a93623e9-3eab-47bb-b94a-5b962f3eb203/ovnkube-controller/3.log" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.454262 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-whb2h_a93623e9-3eab-47bb-b94a-5b962f3eb203/ovn-acl-logging/0.log" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.454701 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-whb2h_a93623e9-3eab-47bb-b94a-5b962f3eb203/ovn-controller/0.log" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.455064 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.481983 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hz9vn_80ef00cc-97bb-4f08-ba72-3947ab29043f/kube-multus/2.log" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.486453 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hz9vn_80ef00cc-97bb-4f08-ba72-3947ab29043f/kube-multus/1.log" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.486502 4892 generic.go:334] "Generic (PLEG): container finished" podID="80ef00cc-97bb-4f08-ba72-3947ab29043f" containerID="497bfee3be201ad7f5a2f636b9a63fec67e338fd03270d1e48260b051c0ddd34" exitCode=2 Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.486590 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hz9vn" event={"ID":"80ef00cc-97bb-4f08-ba72-3947ab29043f","Type":"ContainerDied","Data":"497bfee3be201ad7f5a2f636b9a63fec67e338fd03270d1e48260b051c0ddd34"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.486630 4892 scope.go:117] "RemoveContainer" containerID="d5dad65f61c4cb1bb2ceae159bb0447f72fadddb091f462882b14569cfc70bde" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.490721 4892 scope.go:117] "RemoveContainer" containerID="497bfee3be201ad7f5a2f636b9a63fec67e338fd03270d1e48260b051c0ddd34" Jan 22 09:20:22 crc kubenswrapper[4892]: E0122 09:20:22.490937 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-hz9vn_openshift-multus(80ef00cc-97bb-4f08-ba72-3947ab29043f)\"" pod="openshift-multus/multus-hz9vn" podUID="80ef00cc-97bb-4f08-ba72-3947ab29043f" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.492389 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-whb2h_a93623e9-3eab-47bb-b94a-5b962f3eb203/ovnkube-controller/3.log" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.499485 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-whb2h_a93623e9-3eab-47bb-b94a-5b962f3eb203/ovn-acl-logging/0.log" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.499889 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-whb2h_a93623e9-3eab-47bb-b94a-5b962f3eb203/ovn-controller/0.log" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500163 4892 generic.go:334] "Generic (PLEG): container finished" podID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerID="dc60364cbc1956f7f7aa8ad95b265cee21a47c96458b5a7f0f0ad31683c645f8" exitCode=0 Jan 
22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500180 4892 generic.go:334] "Generic (PLEG): container finished" podID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerID="86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be" exitCode=0 Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500187 4892 generic.go:334] "Generic (PLEG): container finished" podID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerID="be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c" exitCode=0 Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500194 4892 generic.go:334] "Generic (PLEG): container finished" podID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerID="5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855" exitCode=0 Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500201 4892 generic.go:334] "Generic (PLEG): container finished" podID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerID="e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be" exitCode=0 Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500206 4892 generic.go:334] "Generic (PLEG): container finished" podID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerID="dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0" exitCode=0 Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500213 4892 generic.go:334] "Generic (PLEG): container finished" podID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerID="168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9" exitCode=143 Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500219 4892 generic.go:334] "Generic (PLEG): container finished" podID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerID="dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237" exitCode=143 Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500235 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" event={"ID":"a93623e9-3eab-47bb-b94a-5b962f3eb203","Type":"ContainerDied","Data":"dc60364cbc1956f7f7aa8ad95b265cee21a47c96458b5a7f0f0ad31683c645f8"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500259 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" event={"ID":"a93623e9-3eab-47bb-b94a-5b962f3eb203","Type":"ContainerDied","Data":"86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500269 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" event={"ID":"a93623e9-3eab-47bb-b94a-5b962f3eb203","Type":"ContainerDied","Data":"be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500293 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" event={"ID":"a93623e9-3eab-47bb-b94a-5b962f3eb203","Type":"ContainerDied","Data":"5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500302 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" event={"ID":"a93623e9-3eab-47bb-b94a-5b962f3eb203","Type":"ContainerDied","Data":"e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500311 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" 
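[Note] The finished ovnkube-node containers above split into exitCode=0 (the six processes that shut down cleanly within the 30-second grace period) and exitCode=143 (128 + SIGTERM, i.e. ovn-acl-logging and ovn-controller died from the TERM signal itself). A tally sketch over the "container finished" entries:

import re
from collections import Counter

FIN = re.compile(r'"Generic \(PLEG\): container finished".*containerID="([0-9a-f]+)" exitCode=(\d+)')

def exit_codes(lines):
    # Counts exit codes; 143 = 128 + SIGTERM (killed), 0 = clean shutdown.
    counts = Counter()
    for line in lines:
        m = FIN.search(line)
        if m:
            counts[int(m.group(2))] += 1
    return counts  # e.g. Counter({0: 6, 143: 2}) for the eight ovnkube-node containers above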
event={"ID":"a93623e9-3eab-47bb-b94a-5b962f3eb203","Type":"ContainerDied","Data":"dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500320 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc60364cbc1956f7f7aa8ad95b265cee21a47c96458b5a7f0f0ad31683c645f8"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500329 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500335 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500340 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500346 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500350 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500356 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500360 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500365 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500370 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500376 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" event={"ID":"a93623e9-3eab-47bb-b94a-5b962f3eb203","Type":"ContainerDied","Data":"168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500383 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc60364cbc1956f7f7aa8ad95b265cee21a47c96458b5a7f0f0ad31683c645f8"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500389 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500395 4892 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500401 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500408 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500416 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500423 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500430 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500436 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500442 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500451 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" event={"ID":"a93623e9-3eab-47bb-b94a-5b962f3eb203","Type":"ContainerDied","Data":"dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500461 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc60364cbc1956f7f7aa8ad95b265cee21a47c96458b5a7f0f0ad31683c645f8"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500468 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500475 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500482 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500488 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500496 4892 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500502 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500509 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500516 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500523 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500532 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" event={"ID":"a93623e9-3eab-47bb-b94a-5b962f3eb203","Type":"ContainerDied","Data":"5cb0868bf5953d4b5ac00c2b59132114cba630d48d879e15bcd646c0821dd213"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500542 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc60364cbc1956f7f7aa8ad95b265cee21a47c96458b5a7f0f0ad31683c645f8"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500549 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500556 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500563 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500570 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500578 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500585 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500592 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500600 4892 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500607 4892 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1"} Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.500711 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-whb2h" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.519050 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gms7s"] Jan 22 09:20:22 crc kubenswrapper[4892]: E0122 09:20:22.519255 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="ovnkube-controller" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.519267 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="ovnkube-controller" Jan 22 09:20:22 crc kubenswrapper[4892]: E0122 09:20:22.519303 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="ovn-controller" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.519313 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="ovn-controller" Jan 22 09:20:22 crc kubenswrapper[4892]: E0122 09:20:22.519324 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="ovn-acl-logging" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.519330 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="ovn-acl-logging" Jan 22 09:20:22 crc kubenswrapper[4892]: E0122 09:20:22.519339 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="ovnkube-controller" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.519345 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="ovnkube-controller" Jan 22 09:20:22 crc kubenswrapper[4892]: E0122 09:20:22.519355 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="kube-rbac-proxy-node" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.519361 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="kube-rbac-proxy-node" Jan 22 09:20:22 crc kubenswrapper[4892]: E0122 09:20:22.519371 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="ovnkube-controller" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.519377 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="ovnkube-controller" Jan 22 09:20:22 crc kubenswrapper[4892]: E0122 09:20:22.519384 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="ovnkube-controller" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.519389 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" 
containerName="ovnkube-controller" Jan 22 09:20:22 crc kubenswrapper[4892]: E0122 09:20:22.519399 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.519404 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 09:20:22 crc kubenswrapper[4892]: E0122 09:20:22.519414 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="sbdb" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.519419 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="sbdb" Jan 22 09:20:22 crc kubenswrapper[4892]: E0122 09:20:22.519444 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="nbdb" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.519455 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="nbdb" Jan 22 09:20:22 crc kubenswrapper[4892]: E0122 09:20:22.519474 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="northd" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.519482 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="northd" Jan 22 09:20:22 crc kubenswrapper[4892]: E0122 09:20:22.519499 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="kubecfg-setup" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.519506 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="kubecfg-setup" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.519601 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="kube-rbac-proxy-node" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.519612 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="ovnkube-controller" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.519618 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="nbdb" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.519625 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="ovnkube-controller" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.519632 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="sbdb" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.519638 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="ovn-controller" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.519647 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="ovnkube-controller" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.519653 4892 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="ovnkube-controller" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.519661 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="ovn-acl-logging" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.519670 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.519678 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="northd" Jan 22 09:20:22 crc kubenswrapper[4892]: E0122 09:20:22.519763 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="ovnkube-controller" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.519771 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="ovnkube-controller" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.519845 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" containerName="ovnkube-controller" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.521371 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.522680 4892 scope.go:117] "RemoveContainer" containerID="dc60364cbc1956f7f7aa8ad95b265cee21a47c96458b5a7f0f0ad31683c645f8" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.540296 4892 scope.go:117] "RemoveContainer" containerID="0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.560253 4892 scope.go:117] "RemoveContainer" containerID="86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.571786 4892 scope.go:117] "RemoveContainer" containerID="be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.584661 4892 scope.go:117] "RemoveContainer" containerID="5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.595123 4892 scope.go:117] "RemoveContainer" containerID="e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.599460 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-var-lib-cni-networks-ovn-kubernetes\") pod \"a93623e9-3eab-47bb-b94a-5b962f3eb203\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.599545 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-run-netns\") pod \"a93623e9-3eab-47bb-b94a-5b962f3eb203\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.599576 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/a93623e9-3eab-47bb-b94a-5b962f3eb203-ovnkube-script-lib\") pod \"a93623e9-3eab-47bb-b94a-5b962f3eb203\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.599598 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-log-socket\") pod \"a93623e9-3eab-47bb-b94a-5b962f3eb203\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.599616 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-systemd-units\") pod \"a93623e9-3eab-47bb-b94a-5b962f3eb203\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.599633 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-cni-netd\") pod \"a93623e9-3eab-47bb-b94a-5b962f3eb203\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.599631 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "a93623e9-3eab-47bb-b94a-5b962f3eb203" (UID: "a93623e9-3eab-47bb-b94a-5b962f3eb203"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.599662 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-run-ovn\") pod \"a93623e9-3eab-47bb-b94a-5b962f3eb203\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.599686 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-kubelet\") pod \"a93623e9-3eab-47bb-b94a-5b962f3eb203\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.599698 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "a93623e9-3eab-47bb-b94a-5b962f3eb203" (UID: "a93623e9-3eab-47bb-b94a-5b962f3eb203"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.599711 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a93623e9-3eab-47bb-b94a-5b962f3eb203-ovnkube-config\") pod \"a93623e9-3eab-47bb-b94a-5b962f3eb203\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.599717 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "a93623e9-3eab-47bb-b94a-5b962f3eb203" (UID: "a93623e9-3eab-47bb-b94a-5b962f3eb203"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.599728 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "a93623e9-3eab-47bb-b94a-5b962f3eb203" (UID: "a93623e9-3eab-47bb-b94a-5b962f3eb203"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.599723 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "a93623e9-3eab-47bb-b94a-5b962f3eb203" (UID: "a93623e9-3eab-47bb-b94a-5b962f3eb203"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.599756 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "a93623e9-3eab-47bb-b94a-5b962f3eb203" (UID: "a93623e9-3eab-47bb-b94a-5b962f3eb203"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.599756 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-run-ovn-kubernetes\") pod \"a93623e9-3eab-47bb-b94a-5b962f3eb203\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.599670 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-log-socket" (OuterVolumeSpecName: "log-socket") pod "a93623e9-3eab-47bb-b94a-5b962f3eb203" (UID: "a93623e9-3eab-47bb-b94a-5b962f3eb203"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.599831 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-node-log\") pod \"a93623e9-3eab-47bb-b94a-5b962f3eb203\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.599852 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-cni-bin\") pod \"a93623e9-3eab-47bb-b94a-5b962f3eb203\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.599877 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-var-lib-openvswitch\") pod \"a93623e9-3eab-47bb-b94a-5b962f3eb203\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.599881 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "a93623e9-3eab-47bb-b94a-5b962f3eb203" (UID: "a93623e9-3eab-47bb-b94a-5b962f3eb203"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.599913 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "a93623e9-3eab-47bb-b94a-5b962f3eb203" (UID: "a93623e9-3eab-47bb-b94a-5b962f3eb203"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.599919 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-run-openvswitch\") pod \"a93623e9-3eab-47bb-b94a-5b962f3eb203\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.599945 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "a93623e9-3eab-47bb-b94a-5b962f3eb203" (UID: "a93623e9-3eab-47bb-b94a-5b962f3eb203"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.599962 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-slash\") pod \"a93623e9-3eab-47bb-b94a-5b962f3eb203\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.599973 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "a93623e9-3eab-47bb-b94a-5b962f3eb203" (UID: "a93623e9-3eab-47bb-b94a-5b962f3eb203"). 
InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.599988 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-etc-openvswitch\") pod \"a93623e9-3eab-47bb-b94a-5b962f3eb203\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.599998 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-slash" (OuterVolumeSpecName: "host-slash") pod "a93623e9-3eab-47bb-b94a-5b962f3eb203" (UID: "a93623e9-3eab-47bb-b94a-5b962f3eb203"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.600009 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a93623e9-3eab-47bb-b94a-5b962f3eb203-env-overrides\") pod \"a93623e9-3eab-47bb-b94a-5b962f3eb203\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.600020 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "a93623e9-3eab-47bb-b94a-5b962f3eb203" (UID: "a93623e9-3eab-47bb-b94a-5b962f3eb203"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.600026 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvw6q\" (UniqueName: \"kubernetes.io/projected/a93623e9-3eab-47bb-b94a-5b962f3eb203-kube-api-access-cvw6q\") pod \"a93623e9-3eab-47bb-b94a-5b962f3eb203\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.600047 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a93623e9-3eab-47bb-b94a-5b962f3eb203-ovn-node-metrics-cert\") pod \"a93623e9-3eab-47bb-b94a-5b962f3eb203\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.600067 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-run-systemd\") pod \"a93623e9-3eab-47bb-b94a-5b962f3eb203\" (UID: \"a93623e9-3eab-47bb-b94a-5b962f3eb203\") " Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.600256 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-var-lib-openvswitch\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.600275 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-ovnkube-config\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.600436 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-ovn-node-metrics-cert\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.600479 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-host-cni-netd\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.600495 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-host-run-ovn-kubernetes\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.600515 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-systemd-units\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.600528 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-run-openvswitch\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.600563 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-run-systemd\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.600587 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-run-ovn\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.600617 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-ovnkube-script-lib\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.600644 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72cmz\" (UniqueName: 
\"kubernetes.io/projected/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-kube-api-access-72cmz\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.600918 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-etc-openvswitch\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.600956 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-host-slash\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.600982 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-host-kubelet\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.601017 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.601042 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-env-overrides\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.601081 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-node-log\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.601170 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-host-run-netns\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.601195 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-log-socket\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.601243 4892 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-host-cni-bin\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.601311 4892 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.601326 4892 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.601337 4892 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.601348 4892 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-slash\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.601358 4892 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.601368 4892 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.601379 4892 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.601391 4892 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-log-socket\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.601401 4892 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.601412 4892 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.601423 4892 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.601434 4892 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 22 
09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.601445 4892 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.600274 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a93623e9-3eab-47bb-b94a-5b962f3eb203-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "a93623e9-3eab-47bb-b94a-5b962f3eb203" (UID: "a93623e9-3eab-47bb-b94a-5b962f3eb203"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.600395 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a93623e9-3eab-47bb-b94a-5b962f3eb203-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "a93623e9-3eab-47bb-b94a-5b962f3eb203" (UID: "a93623e9-3eab-47bb-b94a-5b962f3eb203"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.601595 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a93623e9-3eab-47bb-b94a-5b962f3eb203-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "a93623e9-3eab-47bb-b94a-5b962f3eb203" (UID: "a93623e9-3eab-47bb-b94a-5b962f3eb203"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.601764 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-node-log" (OuterVolumeSpecName: "node-log") pod "a93623e9-3eab-47bb-b94a-5b962f3eb203" (UID: "a93623e9-3eab-47bb-b94a-5b962f3eb203"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.605478 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a93623e9-3eab-47bb-b94a-5b962f3eb203-kube-api-access-cvw6q" (OuterVolumeSpecName: "kube-api-access-cvw6q") pod "a93623e9-3eab-47bb-b94a-5b962f3eb203" (UID: "a93623e9-3eab-47bb-b94a-5b962f3eb203"). InnerVolumeSpecName "kube-api-access-cvw6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.605666 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a93623e9-3eab-47bb-b94a-5b962f3eb203-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "a93623e9-3eab-47bb-b94a-5b962f3eb203" (UID: "a93623e9-3eab-47bb-b94a-5b962f3eb203"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.609890 4892 scope.go:117] "RemoveContainer" containerID="dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.614000 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "a93623e9-3eab-47bb-b94a-5b962f3eb203" (UID: "a93623e9-3eab-47bb-b94a-5b962f3eb203"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.620187 4892 scope.go:117] "RemoveContainer" containerID="168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.630224 4892 scope.go:117] "RemoveContainer" containerID="dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.641498 4892 scope.go:117] "RemoveContainer" containerID="a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.653560 4892 scope.go:117] "RemoveContainer" containerID="dc60364cbc1956f7f7aa8ad95b265cee21a47c96458b5a7f0f0ad31683c645f8" Jan 22 09:20:22 crc kubenswrapper[4892]: E0122 09:20:22.653927 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc60364cbc1956f7f7aa8ad95b265cee21a47c96458b5a7f0f0ad31683c645f8\": container with ID starting with dc60364cbc1956f7f7aa8ad95b265cee21a47c96458b5a7f0f0ad31683c645f8 not found: ID does not exist" containerID="dc60364cbc1956f7f7aa8ad95b265cee21a47c96458b5a7f0f0ad31683c645f8" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.653967 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc60364cbc1956f7f7aa8ad95b265cee21a47c96458b5a7f0f0ad31683c645f8"} err="failed to get container status \"dc60364cbc1956f7f7aa8ad95b265cee21a47c96458b5a7f0f0ad31683c645f8\": rpc error: code = NotFound desc = could not find container \"dc60364cbc1956f7f7aa8ad95b265cee21a47c96458b5a7f0f0ad31683c645f8\": container with ID starting with dc60364cbc1956f7f7aa8ad95b265cee21a47c96458b5a7f0f0ad31683c645f8 not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.653991 4892 scope.go:117] "RemoveContainer" containerID="0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c" Jan 22 09:20:22 crc kubenswrapper[4892]: E0122 09:20:22.654266 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c\": container with ID starting with 0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c not found: ID does not exist" containerID="0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.654312 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c"} err="failed to get container status \"0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c\": rpc error: code = NotFound desc = could not find container \"0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c\": container with ID starting with 0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.654334 4892 scope.go:117] "RemoveContainer" containerID="86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be" Jan 22 09:20:22 crc kubenswrapper[4892]: E0122 09:20:22.654607 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be\": container with ID starting with 
86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be not found: ID does not exist" containerID="86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.654640 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be"} err="failed to get container status \"86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be\": rpc error: code = NotFound desc = could not find container \"86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be\": container with ID starting with 86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.654658 4892 scope.go:117] "RemoveContainer" containerID="be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c" Jan 22 09:20:22 crc kubenswrapper[4892]: E0122 09:20:22.654865 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c\": container with ID starting with be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c not found: ID does not exist" containerID="be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.654888 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c"} err="failed to get container status \"be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c\": rpc error: code = NotFound desc = could not find container \"be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c\": container with ID starting with be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.654903 4892 scope.go:117] "RemoveContainer" containerID="5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855" Jan 22 09:20:22 crc kubenswrapper[4892]: E0122 09:20:22.655149 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855\": container with ID starting with 5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855 not found: ID does not exist" containerID="5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.655173 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855"} err="failed to get container status \"5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855\": rpc error: code = NotFound desc = could not find container \"5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855\": container with ID starting with 5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855 not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.655184 4892 scope.go:117] "RemoveContainer" containerID="e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be" Jan 22 09:20:22 crc kubenswrapper[4892]: E0122 09:20:22.655440 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be\": container with ID starting with e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be not found: ID does not exist" containerID="e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.655471 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be"} err="failed to get container status \"e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be\": rpc error: code = NotFound desc = could not find container \"e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be\": container with ID starting with e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.655485 4892 scope.go:117] "RemoveContainer" containerID="dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0" Jan 22 09:20:22 crc kubenswrapper[4892]: E0122 09:20:22.655800 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0\": container with ID starting with dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0 not found: ID does not exist" containerID="dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.655825 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0"} err="failed to get container status \"dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0\": rpc error: code = NotFound desc = could not find container \"dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0\": container with ID starting with dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0 not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.655843 4892 scope.go:117] "RemoveContainer" containerID="168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9" Jan 22 09:20:22 crc kubenswrapper[4892]: E0122 09:20:22.656023 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9\": container with ID starting with 168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9 not found: ID does not exist" containerID="168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.656048 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9"} err="failed to get container status \"168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9\": rpc error: code = NotFound desc = could not find container \"168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9\": container with ID starting with 168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9 not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.656062 4892 scope.go:117] "RemoveContainer" 
containerID="dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237" Jan 22 09:20:22 crc kubenswrapper[4892]: E0122 09:20:22.656595 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237\": container with ID starting with dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237 not found: ID does not exist" containerID="dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.656616 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237"} err="failed to get container status \"dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237\": rpc error: code = NotFound desc = could not find container \"dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237\": container with ID starting with dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237 not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.656632 4892 scope.go:117] "RemoveContainer" containerID="a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1" Jan 22 09:20:22 crc kubenswrapper[4892]: E0122 09:20:22.656845 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\": container with ID starting with a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1 not found: ID does not exist" containerID="a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.656865 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1"} err="failed to get container status \"a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\": rpc error: code = NotFound desc = could not find container \"a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\": container with ID starting with a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1 not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.656880 4892 scope.go:117] "RemoveContainer" containerID="dc60364cbc1956f7f7aa8ad95b265cee21a47c96458b5a7f0f0ad31683c645f8" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.657053 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc60364cbc1956f7f7aa8ad95b265cee21a47c96458b5a7f0f0ad31683c645f8"} err="failed to get container status \"dc60364cbc1956f7f7aa8ad95b265cee21a47c96458b5a7f0f0ad31683c645f8\": rpc error: code = NotFound desc = could not find container \"dc60364cbc1956f7f7aa8ad95b265cee21a47c96458b5a7f0f0ad31683c645f8\": container with ID starting with dc60364cbc1956f7f7aa8ad95b265cee21a47c96458b5a7f0f0ad31683c645f8 not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.657072 4892 scope.go:117] "RemoveContainer" containerID="0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.657243 4892 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c"} err="failed to get container status \"0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c\": rpc error: code = NotFound desc = could not find container \"0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c\": container with ID starting with 0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.657266 4892 scope.go:117] "RemoveContainer" containerID="86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.657569 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be"} err="failed to get container status \"86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be\": rpc error: code = NotFound desc = could not find container \"86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be\": container with ID starting with 86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.657593 4892 scope.go:117] "RemoveContainer" containerID="be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.657974 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c"} err="failed to get container status \"be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c\": rpc error: code = NotFound desc = could not find container \"be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c\": container with ID starting with be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.657994 4892 scope.go:117] "RemoveContainer" containerID="5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.658193 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855"} err="failed to get container status \"5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855\": rpc error: code = NotFound desc = could not find container \"5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855\": container with ID starting with 5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855 not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.658211 4892 scope.go:117] "RemoveContainer" containerID="e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.658516 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be"} err="failed to get container status \"e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be\": rpc error: code = NotFound desc = could not find container \"e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be\": container with ID starting with e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be not found: ID does not exist" Jan 
22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.658532 4892 scope.go:117] "RemoveContainer" containerID="dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.658729 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0"} err="failed to get container status \"dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0\": rpc error: code = NotFound desc = could not find container \"dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0\": container with ID starting with dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0 not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.658747 4892 scope.go:117] "RemoveContainer" containerID="168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.658913 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9"} err="failed to get container status \"168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9\": rpc error: code = NotFound desc = could not find container \"168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9\": container with ID starting with 168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9 not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.658936 4892 scope.go:117] "RemoveContainer" containerID="dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.659123 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237"} err="failed to get container status \"dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237\": rpc error: code = NotFound desc = could not find container \"dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237\": container with ID starting with dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237 not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.659150 4892 scope.go:117] "RemoveContainer" containerID="a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.659449 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1"} err="failed to get container status \"a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\": rpc error: code = NotFound desc = could not find container \"a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\": container with ID starting with a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1 not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.659469 4892 scope.go:117] "RemoveContainer" containerID="dc60364cbc1956f7f7aa8ad95b265cee21a47c96458b5a7f0f0ad31683c645f8" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.659748 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc60364cbc1956f7f7aa8ad95b265cee21a47c96458b5a7f0f0ad31683c645f8"} err="failed to get container status 
\"dc60364cbc1956f7f7aa8ad95b265cee21a47c96458b5a7f0f0ad31683c645f8\": rpc error: code = NotFound desc = could not find container \"dc60364cbc1956f7f7aa8ad95b265cee21a47c96458b5a7f0f0ad31683c645f8\": container with ID starting with dc60364cbc1956f7f7aa8ad95b265cee21a47c96458b5a7f0f0ad31683c645f8 not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.659766 4892 scope.go:117] "RemoveContainer" containerID="0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.659996 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c"} err="failed to get container status \"0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c\": rpc error: code = NotFound desc = could not find container \"0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c\": container with ID starting with 0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.660014 4892 scope.go:117] "RemoveContainer" containerID="86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.660198 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be"} err="failed to get container status \"86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be\": rpc error: code = NotFound desc = could not find container \"86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be\": container with ID starting with 86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.660226 4892 scope.go:117] "RemoveContainer" containerID="be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.660503 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c"} err="failed to get container status \"be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c\": rpc error: code = NotFound desc = could not find container \"be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c\": container with ID starting with be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.660527 4892 scope.go:117] "RemoveContainer" containerID="5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.660688 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855"} err="failed to get container status \"5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855\": rpc error: code = NotFound desc = could not find container \"5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855\": container with ID starting with 5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855 not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.660711 4892 scope.go:117] "RemoveContainer" 
containerID="e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.660945 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be"} err="failed to get container status \"e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be\": rpc error: code = NotFound desc = could not find container \"e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be\": container with ID starting with e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.660968 4892 scope.go:117] "RemoveContainer" containerID="dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.661372 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0"} err="failed to get container status \"dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0\": rpc error: code = NotFound desc = could not find container \"dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0\": container with ID starting with dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0 not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.661394 4892 scope.go:117] "RemoveContainer" containerID="168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.661700 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9"} err="failed to get container status \"168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9\": rpc error: code = NotFound desc = could not find container \"168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9\": container with ID starting with 168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9 not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.661719 4892 scope.go:117] "RemoveContainer" containerID="dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.661964 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237"} err="failed to get container status \"dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237\": rpc error: code = NotFound desc = could not find container \"dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237\": container with ID starting with dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237 not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.661984 4892 scope.go:117] "RemoveContainer" containerID="a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.662310 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1"} err="failed to get container status \"a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\": rpc error: code = NotFound desc = could not find 
container \"a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\": container with ID starting with a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1 not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.662331 4892 scope.go:117] "RemoveContainer" containerID="dc60364cbc1956f7f7aa8ad95b265cee21a47c96458b5a7f0f0ad31683c645f8" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.662611 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc60364cbc1956f7f7aa8ad95b265cee21a47c96458b5a7f0f0ad31683c645f8"} err="failed to get container status \"dc60364cbc1956f7f7aa8ad95b265cee21a47c96458b5a7f0f0ad31683c645f8\": rpc error: code = NotFound desc = could not find container \"dc60364cbc1956f7f7aa8ad95b265cee21a47c96458b5a7f0f0ad31683c645f8\": container with ID starting with dc60364cbc1956f7f7aa8ad95b265cee21a47c96458b5a7f0f0ad31683c645f8 not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.662638 4892 scope.go:117] "RemoveContainer" containerID="0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.663016 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c"} err="failed to get container status \"0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c\": rpc error: code = NotFound desc = could not find container \"0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c\": container with ID starting with 0180d745bab656f0ab23e06ec15063d740f723d6e3c50fc209b203e057417f2c not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.663034 4892 scope.go:117] "RemoveContainer" containerID="86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.663278 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be"} err="failed to get container status \"86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be\": rpc error: code = NotFound desc = could not find container \"86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be\": container with ID starting with 86c743919fc33792e7a31e4ad09c85bc4c020be180d23f1dfd37c130a45503be not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.663308 4892 scope.go:117] "RemoveContainer" containerID="be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.665275 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c"} err="failed to get container status \"be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c\": rpc error: code = NotFound desc = could not find container \"be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c\": container with ID starting with be19facb72eae8993913601a1905ebfcdb07d1543d7bef7dd25e64428e0bb99c not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.665442 4892 scope.go:117] "RemoveContainer" containerID="5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.665687 4892 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855"} err="failed to get container status \"5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855\": rpc error: code = NotFound desc = could not find container \"5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855\": container with ID starting with 5aa2a220cc844a904f9df6f476dac6e7ccd641953da5fde88895815146e9a855 not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.665707 4892 scope.go:117] "RemoveContainer" containerID="e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.665891 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be"} err="failed to get container status \"e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be\": rpc error: code = NotFound desc = could not find container \"e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be\": container with ID starting with e1b33c7fcf1f0172345de17557a05853410bc41c895c6305f7f2f3f5f16ea5be not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.665914 4892 scope.go:117] "RemoveContainer" containerID="dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.666127 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0"} err="failed to get container status \"dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0\": rpc error: code = NotFound desc = could not find container \"dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0\": container with ID starting with dc803c6273306d7b0e7e763c4ee612a5d9b819750c49a1058cd7fb528f11afb0 not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.666155 4892 scope.go:117] "RemoveContainer" containerID="168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.666361 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9"} err="failed to get container status \"168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9\": rpc error: code = NotFound desc = could not find container \"168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9\": container with ID starting with 168c03f776ee56c1f4ebea296fd2131afc99d4fcb38b401fc897c3418b22f6d9 not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.666379 4892 scope.go:117] "RemoveContainer" containerID="dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.666649 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237"} err="failed to get container status \"dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237\": rpc error: code = NotFound desc = could not find container \"dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237\": container with ID starting with 
dbfbfe04c33d04cfd22c44ac39c76ccee4c6966caf0ce6e15b7b0a3580904237 not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.666665 4892 scope.go:117] "RemoveContainer" containerID="a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.666981 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1"} err="failed to get container status \"a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\": rpc error: code = NotFound desc = could not find container \"a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1\": container with ID starting with a5fd82b06c87980223f0fa9e9e962b81da3e6b75528ac7ee3498b9a3161caea1 not found: ID does not exist" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.702862 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-run-openvswitch\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.702907 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-systemd-units\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.702937 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-run-systemd\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.702962 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-run-ovn\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.702992 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-ovnkube-script-lib\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703020 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72cmz\" (UniqueName: \"kubernetes.io/projected/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-kube-api-access-72cmz\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703033 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-systemd-units\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 
09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703042 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-etc-openvswitch\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703069 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-etc-openvswitch\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703084 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-run-systemd\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703097 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-host-slash\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703121 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-host-slash\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703105 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-run-ovn\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703254 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-run-openvswitch\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703259 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-host-kubelet\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703278 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-host-kubelet\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703430 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703463 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-env-overrides\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703489 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-node-log\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703552 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-host-run-netns\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703574 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-log-socket\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703601 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-host-cni-bin\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703631 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-var-lib-openvswitch\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703653 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-ovnkube-config\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703659 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-node-log\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703676 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-ovn-node-metrics-cert\") pod 
\"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703688 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703705 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-host-cni-netd\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703730 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-host-run-ovn-kubernetes\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703774 4892 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-node-log\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703787 4892 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a93623e9-3eab-47bb-b94a-5b962f3eb203-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703800 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvw6q\" (UniqueName: \"kubernetes.io/projected/a93623e9-3eab-47bb-b94a-5b962f3eb203-kube-api-access-cvw6q\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703813 4892 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a93623e9-3eab-47bb-b94a-5b962f3eb203-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703825 4892 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a93623e9-3eab-47bb-b94a-5b962f3eb203-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703836 4892 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a93623e9-3eab-47bb-b94a-5b962f3eb203-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703847 4892 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a93623e9-3eab-47bb-b94a-5b962f3eb203-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703856 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-ovnkube-script-lib\") pod \"ovnkube-node-gms7s\" (UID: 
\"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703877 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-host-run-ovn-kubernetes\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703904 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-var-lib-openvswitch\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703935 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-host-run-netns\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703963 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-log-socket\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.703989 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-host-cni-bin\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.704145 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-env-overrides\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.704186 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-host-cni-netd\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.704689 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-ovnkube-config\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.708767 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-ovn-node-metrics-cert\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 
09:20:22.718540 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72cmz\" (UniqueName: \"kubernetes.io/projected/6aa210e0-ad8e-40c8-a33a-6a7781b44fc2-kube-api-access-72cmz\") pod \"ovnkube-node-gms7s\" (UID: \"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2\") " pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.827605 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-whb2h"] Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.831104 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-whb2h"] Jan 22 09:20:22 crc kubenswrapper[4892]: I0122 09:20:22.836479 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:22 crc kubenswrapper[4892]: W0122 09:20:22.857351 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6aa210e0_ad8e_40c8_a33a_6a7781b44fc2.slice/crio-e9e4ce67c14f97d294a0dacfc43065a2fbb46410e169d461a4f8191de7bfdb5f WatchSource:0}: Error finding container e9e4ce67c14f97d294a0dacfc43065a2fbb46410e169d461a4f8191de7bfdb5f: Status 404 returned error can't find the container with id e9e4ce67c14f97d294a0dacfc43065a2fbb46410e169d461a4f8191de7bfdb5f Jan 22 09:20:23 crc kubenswrapper[4892]: I0122 09:20:23.424446 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a93623e9-3eab-47bb-b94a-5b962f3eb203" path="/var/lib/kubelet/pods/a93623e9-3eab-47bb-b94a-5b962f3eb203/volumes" Jan 22 09:20:23 crc kubenswrapper[4892]: I0122 09:20:23.507932 4892 generic.go:334] "Generic (PLEG): container finished" podID="6aa210e0-ad8e-40c8-a33a-6a7781b44fc2" containerID="977832d57d61188379e9e1808186f26d252cc08e1af00b26a4e5e61c412ee17c" exitCode=0 Jan 22 09:20:23 crc kubenswrapper[4892]: I0122 09:20:23.508098 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" event={"ID":"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2","Type":"ContainerDied","Data":"977832d57d61188379e9e1808186f26d252cc08e1af00b26a4e5e61c412ee17c"} Jan 22 09:20:23 crc kubenswrapper[4892]: I0122 09:20:23.508150 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" event={"ID":"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2","Type":"ContainerStarted","Data":"e9e4ce67c14f97d294a0dacfc43065a2fbb46410e169d461a4f8191de7bfdb5f"} Jan 22 09:20:23 crc kubenswrapper[4892]: I0122 09:20:23.511009 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hz9vn_80ef00cc-97bb-4f08-ba72-3947ab29043f/kube-multus/2.log" Jan 22 09:20:24 crc kubenswrapper[4892]: I0122 09:20:24.525603 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" event={"ID":"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2","Type":"ContainerStarted","Data":"0c4bfa9cdb2fa25b79b04199d780d8e328162fb65e2b087a2bc5863b45483fcc"} Jan 22 09:20:24 crc kubenswrapper[4892]: I0122 09:20:24.527031 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" event={"ID":"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2","Type":"ContainerStarted","Data":"114b5265cb13762bed690b569e6b11d7811a9d5aaa5d703cde63335a749280fa"} Jan 22 09:20:24 crc kubenswrapper[4892]: I0122 09:20:24.527103 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" event={"ID":"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2","Type":"ContainerStarted","Data":"bc52a8c29b998556f9ef4be8903f75dd131693c3aa8ca26b43e639402accd8db"} Jan 22 09:20:24 crc kubenswrapper[4892]: I0122 09:20:24.527159 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" event={"ID":"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2","Type":"ContainerStarted","Data":"2bedbccbf78377e5ce2782fbcd5d1df9a27e7a375edb0bb6ba75cb7be8d3d143"} Jan 22 09:20:24 crc kubenswrapper[4892]: I0122 09:20:24.527244 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" event={"ID":"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2","Type":"ContainerStarted","Data":"7fdb028af5554dee16a51974090b163b2f293f75c58a549d40fe84f712f1dbb6"} Jan 22 09:20:24 crc kubenswrapper[4892]: I0122 09:20:24.527331 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" event={"ID":"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2","Type":"ContainerStarted","Data":"1df08e441af739654a4e0142fcfa2dde2570f56234a0c42b1ebf8ba0d5f6d977"} Jan 22 09:20:26 crc kubenswrapper[4892]: I0122 09:20:26.537955 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" event={"ID":"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2","Type":"ContainerStarted","Data":"5cc91286f450b03b53d262b3ad90fe24798db4e9320c99e8a1da70e23f6101c6"} Jan 22 09:20:27 crc kubenswrapper[4892]: I0122 09:20:27.882469 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-rv9dl" Jan 22 09:20:28 crc kubenswrapper[4892]: I0122 09:20:28.552822 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" event={"ID":"6aa210e0-ad8e-40c8-a33a-6a7781b44fc2","Type":"ContainerStarted","Data":"dff68ecded5316d71409a2c3236215b091372395cf3b04eb5143fd694572109c"} Jan 22 09:20:28 crc kubenswrapper[4892]: I0122 09:20:28.553216 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:28 crc kubenswrapper[4892]: I0122 09:20:28.553344 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:28 crc kubenswrapper[4892]: I0122 09:20:28.553421 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:28 crc kubenswrapper[4892]: I0122 09:20:28.594666 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:28 crc kubenswrapper[4892]: I0122 09:20:28.607267 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:20:28 crc kubenswrapper[4892]: I0122 09:20:28.629628 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" podStartSLOduration=6.629597533 podStartE2EDuration="6.629597533s" podCreationTimestamp="2026-01-22 09:20:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:20:28.589403833 +0000 UTC m=+598.433482896" watchObservedRunningTime="2026-01-22 09:20:28.629597533 +0000 UTC m=+598.473676606" Jan 22 09:20:33 crc kubenswrapper[4892]: I0122 09:20:33.418888 
4892 scope.go:117] "RemoveContainer" containerID="497bfee3be201ad7f5a2f636b9a63fec67e338fd03270d1e48260b051c0ddd34" Jan 22 09:20:33 crc kubenswrapper[4892]: E0122 09:20:33.419909 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-hz9vn_openshift-multus(80ef00cc-97bb-4f08-ba72-3947ab29043f)\"" pod="openshift-multus/multus-hz9vn" podUID="80ef00cc-97bb-4f08-ba72-3947ab29043f" Jan 22 09:20:46 crc kubenswrapper[4892]: I0122 09:20:46.323889 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:20:46 crc kubenswrapper[4892]: I0122 09:20:46.324584 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:20:47 crc kubenswrapper[4892]: I0122 09:20:47.418851 4892 scope.go:117] "RemoveContainer" containerID="497bfee3be201ad7f5a2f636b9a63fec67e338fd03270d1e48260b051c0ddd34" Jan 22 09:20:48 crc kubenswrapper[4892]: I0122 09:20:48.659264 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hz9vn_80ef00cc-97bb-4f08-ba72-3947ab29043f/kube-multus/2.log" Jan 22 09:20:48 crc kubenswrapper[4892]: I0122 09:20:48.659620 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hz9vn" event={"ID":"80ef00cc-97bb-4f08-ba72-3947ab29043f","Type":"ContainerStarted","Data":"9d4ceeadf3a1a80fdb21be3190a3e8594371de29dff156f0926414db1cf940cc"} Jan 22 09:20:52 crc kubenswrapper[4892]: I0122 09:20:52.869981 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gms7s" Jan 22 09:21:11 crc kubenswrapper[4892]: I0122 09:21:11.267227 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn"] Jan 22 09:21:11 crc kubenswrapper[4892]: I0122 09:21:11.269170 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn" Jan 22 09:21:11 crc kubenswrapper[4892]: I0122 09:21:11.274784 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 22 09:21:11 crc kubenswrapper[4892]: I0122 09:21:11.282903 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn"] Jan 22 09:21:11 crc kubenswrapper[4892]: I0122 09:21:11.396757 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c2da807-7b14-4384-bf1a-dcfad84a6a14-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn\" (UID: \"8c2da807-7b14-4384-bf1a-dcfad84a6a14\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn" Jan 22 09:21:11 crc kubenswrapper[4892]: I0122 09:21:11.396835 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95hc2\" (UniqueName: \"kubernetes.io/projected/8c2da807-7b14-4384-bf1a-dcfad84a6a14-kube-api-access-95hc2\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn\" (UID: \"8c2da807-7b14-4384-bf1a-dcfad84a6a14\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn" Jan 22 09:21:11 crc kubenswrapper[4892]: I0122 09:21:11.396964 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c2da807-7b14-4384-bf1a-dcfad84a6a14-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn\" (UID: \"8c2da807-7b14-4384-bf1a-dcfad84a6a14\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn" Jan 22 09:21:11 crc kubenswrapper[4892]: I0122 09:21:11.498271 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95hc2\" (UniqueName: \"kubernetes.io/projected/8c2da807-7b14-4384-bf1a-dcfad84a6a14-kube-api-access-95hc2\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn\" (UID: \"8c2da807-7b14-4384-bf1a-dcfad84a6a14\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn" Jan 22 09:21:11 crc kubenswrapper[4892]: I0122 09:21:11.498349 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c2da807-7b14-4384-bf1a-dcfad84a6a14-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn\" (UID: \"8c2da807-7b14-4384-bf1a-dcfad84a6a14\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn" Jan 22 09:21:11 crc kubenswrapper[4892]: I0122 09:21:11.498422 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c2da807-7b14-4384-bf1a-dcfad84a6a14-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn\" (UID: \"8c2da807-7b14-4384-bf1a-dcfad84a6a14\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn" Jan 22 09:21:11 crc kubenswrapper[4892]: I0122 09:21:11.498941 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/8c2da807-7b14-4384-bf1a-dcfad84a6a14-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn\" (UID: \"8c2da807-7b14-4384-bf1a-dcfad84a6a14\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn" Jan 22 09:21:11 crc kubenswrapper[4892]: I0122 09:21:11.498987 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c2da807-7b14-4384-bf1a-dcfad84a6a14-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn\" (UID: \"8c2da807-7b14-4384-bf1a-dcfad84a6a14\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn" Jan 22 09:21:11 crc kubenswrapper[4892]: I0122 09:21:11.518456 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95hc2\" (UniqueName: \"kubernetes.io/projected/8c2da807-7b14-4384-bf1a-dcfad84a6a14-kube-api-access-95hc2\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn\" (UID: \"8c2da807-7b14-4384-bf1a-dcfad84a6a14\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn" Jan 22 09:21:11 crc kubenswrapper[4892]: I0122 09:21:11.589964 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn" Jan 22 09:21:11 crc kubenswrapper[4892]: I0122 09:21:11.766707 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn"] Jan 22 09:21:11 crc kubenswrapper[4892]: W0122 09:21:11.774536 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c2da807_7b14_4384_bf1a_dcfad84a6a14.slice/crio-b4a69abdf5774bfa5cb41a1ff4fa057d1b070e6bdfe524d07b11d9a0eacc410d WatchSource:0}: Error finding container b4a69abdf5774bfa5cb41a1ff4fa057d1b070e6bdfe524d07b11d9a0eacc410d: Status 404 returned error can't find the container with id b4a69abdf5774bfa5cb41a1ff4fa057d1b070e6bdfe524d07b11d9a0eacc410d Jan 22 09:21:11 crc kubenswrapper[4892]: I0122 09:21:11.818098 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn" event={"ID":"8c2da807-7b14-4384-bf1a-dcfad84a6a14","Type":"ContainerStarted","Data":"b4a69abdf5774bfa5cb41a1ff4fa057d1b070e6bdfe524d07b11d9a0eacc410d"} Jan 22 09:21:12 crc kubenswrapper[4892]: I0122 09:21:12.826670 4892 generic.go:334] "Generic (PLEG): container finished" podID="8c2da807-7b14-4384-bf1a-dcfad84a6a14" containerID="fe7dbd6c14975e9220ec4256e526c8957a5a0d003db93406cd0754c48c5b33e3" exitCode=0 Jan 22 09:21:12 crc kubenswrapper[4892]: I0122 09:21:12.826785 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn" event={"ID":"8c2da807-7b14-4384-bf1a-dcfad84a6a14","Type":"ContainerDied","Data":"fe7dbd6c14975e9220ec4256e526c8957a5a0d003db93406cd0754c48c5b33e3"} Jan 22 09:21:15 crc kubenswrapper[4892]: I0122 09:21:15.845691 4892 generic.go:334] "Generic (PLEG): container finished" podID="8c2da807-7b14-4384-bf1a-dcfad84a6a14" containerID="fb6a10904fe663c6cf87feadb82eb366025eda64d90ddc4071b68c55b32cbb9e" exitCode=0 Jan 22 09:21:15 crc kubenswrapper[4892]: I0122 09:21:15.845793 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn" event={"ID":"8c2da807-7b14-4384-bf1a-dcfad84a6a14","Type":"ContainerDied","Data":"fb6a10904fe663c6cf87feadb82eb366025eda64d90ddc4071b68c55b32cbb9e"} Jan 22 09:21:16 crc kubenswrapper[4892]: I0122 09:21:16.323579 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:21:16 crc kubenswrapper[4892]: I0122 09:21:16.323715 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:21:16 crc kubenswrapper[4892]: I0122 09:21:16.323777 4892 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" Jan 22 09:21:16 crc kubenswrapper[4892]: I0122 09:21:16.324608 4892 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fd44fb84f1abd6068b0406af0dfd71eaeeb9adbf12f608ae3695759f64602a98"} pod="openshift-machine-config-operator/machine-config-daemon-w87tf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 09:21:16 crc kubenswrapper[4892]: I0122 09:21:16.324710 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" containerID="cri-o://fd44fb84f1abd6068b0406af0dfd71eaeeb9adbf12f608ae3695759f64602a98" gracePeriod=600 Jan 22 09:21:16 crc kubenswrapper[4892]: I0122 09:21:16.854205 4892 generic.go:334] "Generic (PLEG): container finished" podID="4765e554-3060-4876-90fe-5e054619d7a1" containerID="fd44fb84f1abd6068b0406af0dfd71eaeeb9adbf12f608ae3695759f64602a98" exitCode=0 Jan 22 09:21:16 crc kubenswrapper[4892]: I0122 09:21:16.854269 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerDied","Data":"fd44fb84f1abd6068b0406af0dfd71eaeeb9adbf12f608ae3695759f64602a98"} Jan 22 09:21:16 crc kubenswrapper[4892]: I0122 09:21:16.854636 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerStarted","Data":"f3241bed9938434158615102d7fd185345d457bb0f2990573e82de1469f205ee"} Jan 22 09:21:16 crc kubenswrapper[4892]: I0122 09:21:16.854671 4892 scope.go:117] "RemoveContainer" containerID="a7f0526153acdca2ca5f99af784bf184f41709f20620fb5551c5c6b34103a995" Jan 22 09:21:16 crc kubenswrapper[4892]: I0122 09:21:16.874541 4892 generic.go:334] "Generic (PLEG): container finished" podID="8c2da807-7b14-4384-bf1a-dcfad84a6a14" containerID="b51e6d2f43aafe7245d0211bf7d8ea64e64c32545ff6c82554fc42f88fb87d12" exitCode=0 Jan 22 09:21:16 crc kubenswrapper[4892]: I0122 09:21:16.874589 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn" event={"ID":"8c2da807-7b14-4384-bf1a-dcfad84a6a14","Type":"ContainerDied","Data":"b51e6d2f43aafe7245d0211bf7d8ea64e64c32545ff6c82554fc42f88fb87d12"} Jan 22 09:21:18 crc kubenswrapper[4892]: I0122 09:21:18.199810 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn" Jan 22 09:21:18 crc kubenswrapper[4892]: I0122 09:21:18.284717 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c2da807-7b14-4384-bf1a-dcfad84a6a14-bundle\") pod \"8c2da807-7b14-4384-bf1a-dcfad84a6a14\" (UID: \"8c2da807-7b14-4384-bf1a-dcfad84a6a14\") " Jan 22 09:21:18 crc kubenswrapper[4892]: I0122 09:21:18.284789 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95hc2\" (UniqueName: \"kubernetes.io/projected/8c2da807-7b14-4384-bf1a-dcfad84a6a14-kube-api-access-95hc2\") pod \"8c2da807-7b14-4384-bf1a-dcfad84a6a14\" (UID: \"8c2da807-7b14-4384-bf1a-dcfad84a6a14\") " Jan 22 09:21:18 crc kubenswrapper[4892]: I0122 09:21:18.284859 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c2da807-7b14-4384-bf1a-dcfad84a6a14-util\") pod \"8c2da807-7b14-4384-bf1a-dcfad84a6a14\" (UID: \"8c2da807-7b14-4384-bf1a-dcfad84a6a14\") " Jan 22 09:21:18 crc kubenswrapper[4892]: I0122 09:21:18.286221 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c2da807-7b14-4384-bf1a-dcfad84a6a14-bundle" (OuterVolumeSpecName: "bundle") pod "8c2da807-7b14-4384-bf1a-dcfad84a6a14" (UID: "8c2da807-7b14-4384-bf1a-dcfad84a6a14"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:21:18 crc kubenswrapper[4892]: I0122 09:21:18.296566 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c2da807-7b14-4384-bf1a-dcfad84a6a14-kube-api-access-95hc2" (OuterVolumeSpecName: "kube-api-access-95hc2") pod "8c2da807-7b14-4384-bf1a-dcfad84a6a14" (UID: "8c2da807-7b14-4384-bf1a-dcfad84a6a14"). InnerVolumeSpecName "kube-api-access-95hc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:21:18 crc kubenswrapper[4892]: I0122 09:21:18.296799 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c2da807-7b14-4384-bf1a-dcfad84a6a14-util" (OuterVolumeSpecName: "util") pod "8c2da807-7b14-4384-bf1a-dcfad84a6a14" (UID: "8c2da807-7b14-4384-bf1a-dcfad84a6a14"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:21:18 crc kubenswrapper[4892]: I0122 09:21:18.386014 4892 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c2da807-7b14-4384-bf1a-dcfad84a6a14-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:18 crc kubenswrapper[4892]: I0122 09:21:18.386047 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95hc2\" (UniqueName: \"kubernetes.io/projected/8c2da807-7b14-4384-bf1a-dcfad84a6a14-kube-api-access-95hc2\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:18 crc kubenswrapper[4892]: I0122 09:21:18.386060 4892 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c2da807-7b14-4384-bf1a-dcfad84a6a14-util\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:18 crc kubenswrapper[4892]: I0122 09:21:18.889042 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn" event={"ID":"8c2da807-7b14-4384-bf1a-dcfad84a6a14","Type":"ContainerDied","Data":"b4a69abdf5774bfa5cb41a1ff4fa057d1b070e6bdfe524d07b11d9a0eacc410d"} Jan 22 09:21:18 crc kubenswrapper[4892]: I0122 09:21:18.889091 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4a69abdf5774bfa5cb41a1ff4fa057d1b070e6bdfe524d07b11d9a0eacc410d" Jan 22 09:21:18 crc kubenswrapper[4892]: I0122 09:21:18.889549 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn" Jan 22 09:21:20 crc kubenswrapper[4892]: I0122 09:21:20.344800 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-blljf"] Jan 22 09:21:20 crc kubenswrapper[4892]: E0122 09:21:20.345375 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c2da807-7b14-4384-bf1a-dcfad84a6a14" containerName="pull" Jan 22 09:21:20 crc kubenswrapper[4892]: I0122 09:21:20.345392 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c2da807-7b14-4384-bf1a-dcfad84a6a14" containerName="pull" Jan 22 09:21:20 crc kubenswrapper[4892]: E0122 09:21:20.345410 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c2da807-7b14-4384-bf1a-dcfad84a6a14" containerName="util" Jan 22 09:21:20 crc kubenswrapper[4892]: I0122 09:21:20.345417 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c2da807-7b14-4384-bf1a-dcfad84a6a14" containerName="util" Jan 22 09:21:20 crc kubenswrapper[4892]: E0122 09:21:20.345428 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c2da807-7b14-4384-bf1a-dcfad84a6a14" containerName="extract" Jan 22 09:21:20 crc kubenswrapper[4892]: I0122 09:21:20.345437 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c2da807-7b14-4384-bf1a-dcfad84a6a14" containerName="extract" Jan 22 09:21:20 crc kubenswrapper[4892]: I0122 09:21:20.345561 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c2da807-7b14-4384-bf1a-dcfad84a6a14" containerName="extract" Jan 22 09:21:20 crc kubenswrapper[4892]: I0122 09:21:20.346037 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-blljf" Jan 22 09:21:20 crc kubenswrapper[4892]: I0122 09:21:20.350898 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 22 09:21:20 crc kubenswrapper[4892]: I0122 09:21:20.350898 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 22 09:21:20 crc kubenswrapper[4892]: I0122 09:21:20.351622 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-lwdfs" Jan 22 09:21:20 crc kubenswrapper[4892]: I0122 09:21:20.360813 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-blljf"] Jan 22 09:21:20 crc kubenswrapper[4892]: I0122 09:21:20.447502 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j8z7\" (UniqueName: \"kubernetes.io/projected/ebf0d927-7aa3-4f75-b5be-7037df253175-kube-api-access-8j8z7\") pod \"nmstate-operator-646758c888-blljf\" (UID: \"ebf0d927-7aa3-4f75-b5be-7037df253175\") " pod="openshift-nmstate/nmstate-operator-646758c888-blljf" Jan 22 09:21:20 crc kubenswrapper[4892]: I0122 09:21:20.548345 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j8z7\" (UniqueName: \"kubernetes.io/projected/ebf0d927-7aa3-4f75-b5be-7037df253175-kube-api-access-8j8z7\") pod \"nmstate-operator-646758c888-blljf\" (UID: \"ebf0d927-7aa3-4f75-b5be-7037df253175\") " pod="openshift-nmstate/nmstate-operator-646758c888-blljf" Jan 22 09:21:20 crc kubenswrapper[4892]: I0122 09:21:20.572087 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j8z7\" (UniqueName: \"kubernetes.io/projected/ebf0d927-7aa3-4f75-b5be-7037df253175-kube-api-access-8j8z7\") pod \"nmstate-operator-646758c888-blljf\" (UID: \"ebf0d927-7aa3-4f75-b5be-7037df253175\") " pod="openshift-nmstate/nmstate-operator-646758c888-blljf" Jan 22 09:21:20 crc kubenswrapper[4892]: I0122 09:21:20.664642 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-blljf" Jan 22 09:21:20 crc kubenswrapper[4892]: I0122 09:21:20.848035 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-blljf"] Jan 22 09:21:20 crc kubenswrapper[4892]: W0122 09:21:20.855376 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebf0d927_7aa3_4f75_b5be_7037df253175.slice/crio-d0b5ff7b20cdc7ef8cb8b5fab9dc310a28a5283bf7839ccad14048a34f06354a WatchSource:0}: Error finding container d0b5ff7b20cdc7ef8cb8b5fab9dc310a28a5283bf7839ccad14048a34f06354a: Status 404 returned error can't find the container with id d0b5ff7b20cdc7ef8cb8b5fab9dc310a28a5283bf7839ccad14048a34f06354a Jan 22 09:21:20 crc kubenswrapper[4892]: I0122 09:21:20.900076 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-blljf" event={"ID":"ebf0d927-7aa3-4f75-b5be-7037df253175","Type":"ContainerStarted","Data":"d0b5ff7b20cdc7ef8cb8b5fab9dc310a28a5283bf7839ccad14048a34f06354a"} Jan 22 09:21:36 crc kubenswrapper[4892]: I0122 09:21:36.990076 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-blljf" event={"ID":"ebf0d927-7aa3-4f75-b5be-7037df253175","Type":"ContainerStarted","Data":"caa04db08c796cbb2cda550d152f060f266a38bee830b4e36398072c091e16cf"} Jan 22 09:21:37 crc kubenswrapper[4892]: I0122 09:21:37.003378 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-blljf" podStartSLOduration=1.487195529 podStartE2EDuration="17.003362964s" podCreationTimestamp="2026-01-22 09:21:20 +0000 UTC" firstStartedPulling="2026-01-22 09:21:20.857844245 +0000 UTC m=+650.701923308" lastFinishedPulling="2026-01-22 09:21:36.37401168 +0000 UTC m=+666.218090743" observedRunningTime="2026-01-22 09:21:37.00279202 +0000 UTC m=+666.846871083" watchObservedRunningTime="2026-01-22 09:21:37.003362964 +0000 UTC m=+666.847442027" Jan 22 09:21:37 crc kubenswrapper[4892]: I0122 09:21:37.831949 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-wcg8m"] Jan 22 09:21:37 crc kubenswrapper[4892]: I0122 09:21:37.832936 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-wcg8m" Jan 22 09:21:37 crc kubenswrapper[4892]: I0122 09:21:37.834658 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-6gwhl" Jan 22 09:21:37 crc kubenswrapper[4892]: I0122 09:21:37.855881 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-wcg8m"] Jan 22 09:21:37 crc kubenswrapper[4892]: I0122 09:21:37.860203 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-t6lmb"] Jan 22 09:21:37 crc kubenswrapper[4892]: I0122 09:21:37.861167 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t6lmb" Jan 22 09:21:37 crc kubenswrapper[4892]: I0122 09:21:37.863278 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 22 09:21:37 crc kubenswrapper[4892]: I0122 09:21:37.864391 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-c4k72"] Jan 22 09:21:37 crc kubenswrapper[4892]: I0122 09:21:37.865019 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-c4k72" Jan 22 09:21:37 crc kubenswrapper[4892]: I0122 09:21:37.869179 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-t6lmb"] Jan 22 09:21:37 crc kubenswrapper[4892]: I0122 09:21:37.968329 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-5g6g2"] Jan 22 09:21:37 crc kubenswrapper[4892]: I0122 09:21:37.968997 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5g6g2" Jan 22 09:21:37 crc kubenswrapper[4892]: I0122 09:21:37.972614 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-w6hmh" Jan 22 09:21:37 crc kubenswrapper[4892]: I0122 09:21:37.972685 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 22 09:21:37 crc kubenswrapper[4892]: I0122 09:21:37.972795 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 22 09:21:37 crc kubenswrapper[4892]: I0122 09:21:37.980614 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-5g6g2"] Jan 22 09:21:37 crc kubenswrapper[4892]: I0122 09:21:37.984212 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/76d51902-a31d-4cfd-aa0a-de6c055c79fd-nmstate-lock\") pod \"nmstate-handler-c4k72\" (UID: \"76d51902-a31d-4cfd-aa0a-de6c055c79fd\") " pod="openshift-nmstate/nmstate-handler-c4k72" Jan 22 09:21:37 crc kubenswrapper[4892]: I0122 09:21:37.984344 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/76d51902-a31d-4cfd-aa0a-de6c055c79fd-dbus-socket\") pod \"nmstate-handler-c4k72\" (UID: \"76d51902-a31d-4cfd-aa0a-de6c055c79fd\") " pod="openshift-nmstate/nmstate-handler-c4k72" Jan 22 09:21:37 crc kubenswrapper[4892]: I0122 09:21:37.984381 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hmfn\" (UniqueName: \"kubernetes.io/projected/9911d829-131f-4c59-9268-c0165a5f1126-kube-api-access-7hmfn\") pod \"nmstate-metrics-54757c584b-wcg8m\" (UID: \"9911d829-131f-4c59-9268-c0165a5f1126\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-wcg8m" Jan 22 09:21:37 crc kubenswrapper[4892]: I0122 09:21:37.984408 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-789vg\" (UniqueName: \"kubernetes.io/projected/21249177-5044-4f9b-a0dc-dcad499ec3ad-kube-api-access-789vg\") pod \"nmstate-webhook-8474b5b9d8-t6lmb\" (UID: \"21249177-5044-4f9b-a0dc-dcad499ec3ad\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t6lmb" Jan 22 09:21:37 crc 
kubenswrapper[4892]: I0122 09:21:37.984491 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/76d51902-a31d-4cfd-aa0a-de6c055c79fd-ovs-socket\") pod \"nmstate-handler-c4k72\" (UID: \"76d51902-a31d-4cfd-aa0a-de6c055c79fd\") " pod="openshift-nmstate/nmstate-handler-c4k72" Jan 22 09:21:37 crc kubenswrapper[4892]: I0122 09:21:37.984613 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/21249177-5044-4f9b-a0dc-dcad499ec3ad-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-t6lmb\" (UID: \"21249177-5044-4f9b-a0dc-dcad499ec3ad\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t6lmb" Jan 22 09:21:37 crc kubenswrapper[4892]: I0122 09:21:37.984684 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6mkk\" (UniqueName: \"kubernetes.io/projected/76d51902-a31d-4cfd-aa0a-de6c055c79fd-kube-api-access-n6mkk\") pod \"nmstate-handler-c4k72\" (UID: \"76d51902-a31d-4cfd-aa0a-de6c055c79fd\") " pod="openshift-nmstate/nmstate-handler-c4k72" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.086079 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hmfn\" (UniqueName: \"kubernetes.io/projected/9911d829-131f-4c59-9268-c0165a5f1126-kube-api-access-7hmfn\") pod \"nmstate-metrics-54757c584b-wcg8m\" (UID: \"9911d829-131f-4c59-9268-c0165a5f1126\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-wcg8m" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.086492 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-789vg\" (UniqueName: \"kubernetes.io/projected/21249177-5044-4f9b-a0dc-dcad499ec3ad-kube-api-access-789vg\") pod \"nmstate-webhook-8474b5b9d8-t6lmb\" (UID: \"21249177-5044-4f9b-a0dc-dcad499ec3ad\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t6lmb" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.086536 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/76d51902-a31d-4cfd-aa0a-de6c055c79fd-ovs-socket\") pod \"nmstate-handler-c4k72\" (UID: \"76d51902-a31d-4cfd-aa0a-de6c055c79fd\") " pod="openshift-nmstate/nmstate-handler-c4k72" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.086571 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/21249177-5044-4f9b-a0dc-dcad499ec3ad-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-t6lmb\" (UID: \"21249177-5044-4f9b-a0dc-dcad499ec3ad\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t6lmb" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.086617 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6mkk\" (UniqueName: \"kubernetes.io/projected/76d51902-a31d-4cfd-aa0a-de6c055c79fd-kube-api-access-n6mkk\") pod \"nmstate-handler-c4k72\" (UID: \"76d51902-a31d-4cfd-aa0a-de6c055c79fd\") " pod="openshift-nmstate/nmstate-handler-c4k72" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.086657 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/76d51902-a31d-4cfd-aa0a-de6c055c79fd-nmstate-lock\") pod \"nmstate-handler-c4k72\" (UID: \"76d51902-a31d-4cfd-aa0a-de6c055c79fd\") " 
pod="openshift-nmstate/nmstate-handler-c4k72" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.086687 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js8fm\" (UniqueName: \"kubernetes.io/projected/d0e47195-84b2-4249-8f2e-833525b47d1c-kube-api-access-js8fm\") pod \"nmstate-console-plugin-7754f76f8b-5g6g2\" (UID: \"d0e47195-84b2-4249-8f2e-833525b47d1c\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5g6g2" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.086714 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d0e47195-84b2-4249-8f2e-833525b47d1c-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-5g6g2\" (UID: \"d0e47195-84b2-4249-8f2e-833525b47d1c\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5g6g2" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.086751 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/76d51902-a31d-4cfd-aa0a-de6c055c79fd-dbus-socket\") pod \"nmstate-handler-c4k72\" (UID: \"76d51902-a31d-4cfd-aa0a-de6c055c79fd\") " pod="openshift-nmstate/nmstate-handler-c4k72" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.086777 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d0e47195-84b2-4249-8f2e-833525b47d1c-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-5g6g2\" (UID: \"d0e47195-84b2-4249-8f2e-833525b47d1c\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5g6g2" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.086973 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/76d51902-a31d-4cfd-aa0a-de6c055c79fd-ovs-socket\") pod \"nmstate-handler-c4k72\" (UID: \"76d51902-a31d-4cfd-aa0a-de6c055c79fd\") " pod="openshift-nmstate/nmstate-handler-c4k72" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.087049 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/76d51902-a31d-4cfd-aa0a-de6c055c79fd-nmstate-lock\") pod \"nmstate-handler-c4k72\" (UID: \"76d51902-a31d-4cfd-aa0a-de6c055c79fd\") " pod="openshift-nmstate/nmstate-handler-c4k72" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.087317 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/76d51902-a31d-4cfd-aa0a-de6c055c79fd-dbus-socket\") pod \"nmstate-handler-c4k72\" (UID: \"76d51902-a31d-4cfd-aa0a-de6c055c79fd\") " pod="openshift-nmstate/nmstate-handler-c4k72" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.095337 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/21249177-5044-4f9b-a0dc-dcad499ec3ad-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-t6lmb\" (UID: \"21249177-5044-4f9b-a0dc-dcad499ec3ad\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t6lmb" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.113633 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6mkk\" (UniqueName: \"kubernetes.io/projected/76d51902-a31d-4cfd-aa0a-de6c055c79fd-kube-api-access-n6mkk\") pod \"nmstate-handler-c4k72\" (UID: 
\"76d51902-a31d-4cfd-aa0a-de6c055c79fd\") " pod="openshift-nmstate/nmstate-handler-c4k72" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.116811 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-789vg\" (UniqueName: \"kubernetes.io/projected/21249177-5044-4f9b-a0dc-dcad499ec3ad-kube-api-access-789vg\") pod \"nmstate-webhook-8474b5b9d8-t6lmb\" (UID: \"21249177-5044-4f9b-a0dc-dcad499ec3ad\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t6lmb" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.117424 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hmfn\" (UniqueName: \"kubernetes.io/projected/9911d829-131f-4c59-9268-c0165a5f1126-kube-api-access-7hmfn\") pod \"nmstate-metrics-54757c584b-wcg8m\" (UID: \"9911d829-131f-4c59-9268-c0165a5f1126\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-wcg8m" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.152612 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6485b95b4d-gzgsv"] Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.153287 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6485b95b4d-gzgsv" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.168931 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6485b95b4d-gzgsv"] Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.180072 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-wcg8m" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.187588 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js8fm\" (UniqueName: \"kubernetes.io/projected/d0e47195-84b2-4249-8f2e-833525b47d1c-kube-api-access-js8fm\") pod \"nmstate-console-plugin-7754f76f8b-5g6g2\" (UID: \"d0e47195-84b2-4249-8f2e-833525b47d1c\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5g6g2" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.187634 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d0e47195-84b2-4249-8f2e-833525b47d1c-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-5g6g2\" (UID: \"d0e47195-84b2-4249-8f2e-833525b47d1c\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5g6g2" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.187672 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d0e47195-84b2-4249-8f2e-833525b47d1c-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-5g6g2\" (UID: \"d0e47195-84b2-4249-8f2e-833525b47d1c\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5g6g2" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.188550 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d0e47195-84b2-4249-8f2e-833525b47d1c-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-5g6g2\" (UID: \"d0e47195-84b2-4249-8f2e-833525b47d1c\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5g6g2" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.196675 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-c4k72" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.199899 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d0e47195-84b2-4249-8f2e-833525b47d1c-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-5g6g2\" (UID: \"d0e47195-84b2-4249-8f2e-833525b47d1c\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5g6g2" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.205426 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js8fm\" (UniqueName: \"kubernetes.io/projected/d0e47195-84b2-4249-8f2e-833525b47d1c-kube-api-access-js8fm\") pod \"nmstate-console-plugin-7754f76f8b-5g6g2\" (UID: \"d0e47195-84b2-4249-8f2e-833525b47d1c\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5g6g2" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.231408 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t6lmb" Jan 22 09:21:38 crc kubenswrapper[4892]: W0122 09:21:38.238288 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76d51902_a31d_4cfd_aa0a_de6c055c79fd.slice/crio-dcaf16c652a93ec68a8f9e79bcc3253fea0aac7d3e39a5a93c5ecb92eb73472a WatchSource:0}: Error finding container dcaf16c652a93ec68a8f9e79bcc3253fea0aac7d3e39a5a93c5ecb92eb73472a: Status 404 returned error can't find the container with id dcaf16c652a93ec68a8f9e79bcc3253fea0aac7d3e39a5a93c5ecb92eb73472a Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.287501 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5g6g2" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.288995 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cac16e6c-abc2-4264-8935-a252fa9cdd06-console-oauth-config\") pod \"console-6485b95b4d-gzgsv\" (UID: \"cac16e6c-abc2-4264-8935-a252fa9cdd06\") " pod="openshift-console/console-6485b95b4d-gzgsv" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.289201 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cac16e6c-abc2-4264-8935-a252fa9cdd06-oauth-serving-cert\") pod \"console-6485b95b4d-gzgsv\" (UID: \"cac16e6c-abc2-4264-8935-a252fa9cdd06\") " pod="openshift-console/console-6485b95b4d-gzgsv" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.289373 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cac16e6c-abc2-4264-8935-a252fa9cdd06-trusted-ca-bundle\") pod \"console-6485b95b4d-gzgsv\" (UID: \"cac16e6c-abc2-4264-8935-a252fa9cdd06\") " pod="openshift-console/console-6485b95b4d-gzgsv" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.289404 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cac16e6c-abc2-4264-8935-a252fa9cdd06-service-ca\") pod \"console-6485b95b4d-gzgsv\" (UID: \"cac16e6c-abc2-4264-8935-a252fa9cdd06\") " pod="openshift-console/console-6485b95b4d-gzgsv" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 
09:21:38.289426 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cac16e6c-abc2-4264-8935-a252fa9cdd06-console-serving-cert\") pod \"console-6485b95b4d-gzgsv\" (UID: \"cac16e6c-abc2-4264-8935-a252fa9cdd06\") " pod="openshift-console/console-6485b95b4d-gzgsv" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.289497 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wdxh\" (UniqueName: \"kubernetes.io/projected/cac16e6c-abc2-4264-8935-a252fa9cdd06-kube-api-access-6wdxh\") pod \"console-6485b95b4d-gzgsv\" (UID: \"cac16e6c-abc2-4264-8935-a252fa9cdd06\") " pod="openshift-console/console-6485b95b4d-gzgsv" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.289519 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cac16e6c-abc2-4264-8935-a252fa9cdd06-console-config\") pod \"console-6485b95b4d-gzgsv\" (UID: \"cac16e6c-abc2-4264-8935-a252fa9cdd06\") " pod="openshift-console/console-6485b95b4d-gzgsv" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.368845 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-wcg8m"] Jan 22 09:21:38 crc kubenswrapper[4892]: W0122 09:21:38.375717 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9911d829_131f_4c59_9268_c0165a5f1126.slice/crio-a6d11dda6bee9fff13dc28f8b7146485100de4e876ba0c906730285a5b0981af WatchSource:0}: Error finding container a6d11dda6bee9fff13dc28f8b7146485100de4e876ba0c906730285a5b0981af: Status 404 returned error can't find the container with id a6d11dda6bee9fff13dc28f8b7146485100de4e876ba0c906730285a5b0981af Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.391073 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cac16e6c-abc2-4264-8935-a252fa9cdd06-service-ca\") pod \"console-6485b95b4d-gzgsv\" (UID: \"cac16e6c-abc2-4264-8935-a252fa9cdd06\") " pod="openshift-console/console-6485b95b4d-gzgsv" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.391121 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cac16e6c-abc2-4264-8935-a252fa9cdd06-console-serving-cert\") pod \"console-6485b95b4d-gzgsv\" (UID: \"cac16e6c-abc2-4264-8935-a252fa9cdd06\") " pod="openshift-console/console-6485b95b4d-gzgsv" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.391159 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wdxh\" (UniqueName: \"kubernetes.io/projected/cac16e6c-abc2-4264-8935-a252fa9cdd06-kube-api-access-6wdxh\") pod \"console-6485b95b4d-gzgsv\" (UID: \"cac16e6c-abc2-4264-8935-a252fa9cdd06\") " pod="openshift-console/console-6485b95b4d-gzgsv" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.391217 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cac16e6c-abc2-4264-8935-a252fa9cdd06-console-config\") pod \"console-6485b95b4d-gzgsv\" (UID: \"cac16e6c-abc2-4264-8935-a252fa9cdd06\") " pod="openshift-console/console-6485b95b4d-gzgsv" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.391322 4892 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cac16e6c-abc2-4264-8935-a252fa9cdd06-console-oauth-config\") pod \"console-6485b95b4d-gzgsv\" (UID: \"cac16e6c-abc2-4264-8935-a252fa9cdd06\") " pod="openshift-console/console-6485b95b4d-gzgsv" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.391375 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cac16e6c-abc2-4264-8935-a252fa9cdd06-oauth-serving-cert\") pod \"console-6485b95b4d-gzgsv\" (UID: \"cac16e6c-abc2-4264-8935-a252fa9cdd06\") " pod="openshift-console/console-6485b95b4d-gzgsv" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.391421 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cac16e6c-abc2-4264-8935-a252fa9cdd06-trusted-ca-bundle\") pod \"console-6485b95b4d-gzgsv\" (UID: \"cac16e6c-abc2-4264-8935-a252fa9cdd06\") " pod="openshift-console/console-6485b95b4d-gzgsv" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.392233 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cac16e6c-abc2-4264-8935-a252fa9cdd06-service-ca\") pod \"console-6485b95b4d-gzgsv\" (UID: \"cac16e6c-abc2-4264-8935-a252fa9cdd06\") " pod="openshift-console/console-6485b95b4d-gzgsv" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.392508 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cac16e6c-abc2-4264-8935-a252fa9cdd06-console-config\") pod \"console-6485b95b4d-gzgsv\" (UID: \"cac16e6c-abc2-4264-8935-a252fa9cdd06\") " pod="openshift-console/console-6485b95b4d-gzgsv" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.392819 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cac16e6c-abc2-4264-8935-a252fa9cdd06-oauth-serving-cert\") pod \"console-6485b95b4d-gzgsv\" (UID: \"cac16e6c-abc2-4264-8935-a252fa9cdd06\") " pod="openshift-console/console-6485b95b4d-gzgsv" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.394487 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cac16e6c-abc2-4264-8935-a252fa9cdd06-trusted-ca-bundle\") pod \"console-6485b95b4d-gzgsv\" (UID: \"cac16e6c-abc2-4264-8935-a252fa9cdd06\") " pod="openshift-console/console-6485b95b4d-gzgsv" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.403873 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cac16e6c-abc2-4264-8935-a252fa9cdd06-console-oauth-config\") pod \"console-6485b95b4d-gzgsv\" (UID: \"cac16e6c-abc2-4264-8935-a252fa9cdd06\") " pod="openshift-console/console-6485b95b4d-gzgsv" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.403909 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cac16e6c-abc2-4264-8935-a252fa9cdd06-console-serving-cert\") pod \"console-6485b95b4d-gzgsv\" (UID: \"cac16e6c-abc2-4264-8935-a252fa9cdd06\") " pod="openshift-console/console-6485b95b4d-gzgsv" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.410375 4892 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-6wdxh\" (UniqueName: \"kubernetes.io/projected/cac16e6c-abc2-4264-8935-a252fa9cdd06-kube-api-access-6wdxh\") pod \"console-6485b95b4d-gzgsv\" (UID: \"cac16e6c-abc2-4264-8935-a252fa9cdd06\") " pod="openshift-console/console-6485b95b4d-gzgsv" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.429795 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-t6lmb"] Jan 22 09:21:38 crc kubenswrapper[4892]: W0122 09:21:38.437008 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21249177_5044_4f9b_a0dc_dcad499ec3ad.slice/crio-a07ee2528b5b6ea26f81e8de944f7c22c970b841e5940a36a6aaaf9a19368346 WatchSource:0}: Error finding container a07ee2528b5b6ea26f81e8de944f7c22c970b841e5940a36a6aaaf9a19368346: Status 404 returned error can't find the container with id a07ee2528b5b6ea26f81e8de944f7c22c970b841e5940a36a6aaaf9a19368346 Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.473642 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6485b95b4d-gzgsv" Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.488230 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-5g6g2"] Jan 22 09:21:38 crc kubenswrapper[4892]: W0122 09:21:38.491927 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0e47195_84b2_4249_8f2e_833525b47d1c.slice/crio-9461797a2c6edb47e82dc3dfb396d1288defaaddb354be2d3c08aff23b7fe00c WatchSource:0}: Error finding container 9461797a2c6edb47e82dc3dfb396d1288defaaddb354be2d3c08aff23b7fe00c: Status 404 returned error can't find the container with id 9461797a2c6edb47e82dc3dfb396d1288defaaddb354be2d3c08aff23b7fe00c Jan 22 09:21:38 crc kubenswrapper[4892]: I0122 09:21:38.872487 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6485b95b4d-gzgsv"] Jan 22 09:21:38 crc kubenswrapper[4892]: W0122 09:21:38.876849 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcac16e6c_abc2_4264_8935_a252fa9cdd06.slice/crio-3bc1163f68929049e87e25ac8154367f5da4d3319b9727ac5958132630b5045e WatchSource:0}: Error finding container 3bc1163f68929049e87e25ac8154367f5da4d3319b9727ac5958132630b5045e: Status 404 returned error can't find the container with id 3bc1163f68929049e87e25ac8154367f5da4d3319b9727ac5958132630b5045e Jan 22 09:21:39 crc kubenswrapper[4892]: I0122 09:21:39.001729 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6485b95b4d-gzgsv" event={"ID":"cac16e6c-abc2-4264-8935-a252fa9cdd06","Type":"ContainerStarted","Data":"3bc1163f68929049e87e25ac8154367f5da4d3319b9727ac5958132630b5045e"} Jan 22 09:21:39 crc kubenswrapper[4892]: I0122 09:21:39.003018 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-c4k72" event={"ID":"76d51902-a31d-4cfd-aa0a-de6c055c79fd","Type":"ContainerStarted","Data":"dcaf16c652a93ec68a8f9e79bcc3253fea0aac7d3e39a5a93c5ecb92eb73472a"} Jan 22 09:21:39 crc kubenswrapper[4892]: I0122 09:21:39.004014 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t6lmb" 
event={"ID":"21249177-5044-4f9b-a0dc-dcad499ec3ad","Type":"ContainerStarted","Data":"a07ee2528b5b6ea26f81e8de944f7c22c970b841e5940a36a6aaaf9a19368346"} Jan 22 09:21:39 crc kubenswrapper[4892]: I0122 09:21:39.004859 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5g6g2" event={"ID":"d0e47195-84b2-4249-8f2e-833525b47d1c","Type":"ContainerStarted","Data":"9461797a2c6edb47e82dc3dfb396d1288defaaddb354be2d3c08aff23b7fe00c"} Jan 22 09:21:39 crc kubenswrapper[4892]: I0122 09:21:39.005685 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-wcg8m" event={"ID":"9911d829-131f-4c59-9268-c0165a5f1126","Type":"ContainerStarted","Data":"a6d11dda6bee9fff13dc28f8b7146485100de4e876ba0c906730285a5b0981af"} Jan 22 09:21:40 crc kubenswrapper[4892]: I0122 09:21:40.012801 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6485b95b4d-gzgsv" event={"ID":"cac16e6c-abc2-4264-8935-a252fa9cdd06","Type":"ContainerStarted","Data":"7c98016bcd344e644ae52f00ab174ef17dfc1abc17494d0908f36ead5066bc02"} Jan 22 09:21:40 crc kubenswrapper[4892]: I0122 09:21:40.035149 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6485b95b4d-gzgsv" podStartSLOduration=2.035129115 podStartE2EDuration="2.035129115s" podCreationTimestamp="2026-01-22 09:21:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:21:40.026980119 +0000 UTC m=+669.871059182" watchObservedRunningTime="2026-01-22 09:21:40.035129115 +0000 UTC m=+669.879208178" Jan 22 09:21:45 crc kubenswrapper[4892]: I0122 09:21:45.041335 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5g6g2" event={"ID":"d0e47195-84b2-4249-8f2e-833525b47d1c","Type":"ContainerStarted","Data":"ed4a100c91101465de2dfb954950c2f310d38c4c1c5972a7c26db3a7520ec4ed"} Jan 22 09:21:45 crc kubenswrapper[4892]: I0122 09:21:45.042923 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-c4k72" event={"ID":"76d51902-a31d-4cfd-aa0a-de6c055c79fd","Type":"ContainerStarted","Data":"becaba41f224e5cc54be3dbe8f00061f6b1fe361ba16ae962cbfcf94392619e7"} Jan 22 09:21:45 crc kubenswrapper[4892]: I0122 09:21:45.044575 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t6lmb" event={"ID":"21249177-5044-4f9b-a0dc-dcad499ec3ad","Type":"ContainerStarted","Data":"b1266b560286e1e936bd89cf88be74491fbd3dad7cef502a5aa4dc02f44f2b6a"} Jan 22 09:21:46 crc kubenswrapper[4892]: I0122 09:21:46.050911 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-wcg8m" event={"ID":"9911d829-131f-4c59-9268-c0165a5f1126","Type":"ContainerStarted","Data":"265c7cfb3e63e4ac7d4053bc054f8f66b022e200132a9cbf905b1b5fdb2f8479"} Jan 22 09:21:46 crc kubenswrapper[4892]: I0122 09:21:46.051214 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-c4k72" Jan 22 09:21:46 crc kubenswrapper[4892]: I0122 09:21:46.051557 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t6lmb" Jan 22 09:21:46 crc kubenswrapper[4892]: I0122 09:21:46.088268 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t6lmb" podStartSLOduration=3.002734714 podStartE2EDuration="9.088249115s" podCreationTimestamp="2026-01-22 09:21:37 +0000 UTC" firstStartedPulling="2026-01-22 09:21:38.439394118 +0000 UTC m=+668.283473171" lastFinishedPulling="2026-01-22 09:21:44.524908499 +0000 UTC m=+674.368987572" observedRunningTime="2026-01-22 09:21:46.084758281 +0000 UTC m=+675.928837354" watchObservedRunningTime="2026-01-22 09:21:46.088249115 +0000 UTC m=+675.932328178" Jan 22 09:21:46 crc kubenswrapper[4892]: I0122 09:21:46.089548 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5g6g2" podStartSLOduration=3.140658482 podStartE2EDuration="9.089540996s" podCreationTimestamp="2026-01-22 09:21:37 +0000 UTC" firstStartedPulling="2026-01-22 09:21:38.496959413 +0000 UTC m=+668.341038476" lastFinishedPulling="2026-01-22 09:21:44.445841927 +0000 UTC m=+674.289920990" observedRunningTime="2026-01-22 09:21:46.071047131 +0000 UTC m=+675.915126214" watchObservedRunningTime="2026-01-22 09:21:46.089540996 +0000 UTC m=+675.933620059" Jan 22 09:21:46 crc kubenswrapper[4892]: I0122 09:21:46.103347 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-c4k72" podStartSLOduration=2.889846108 podStartE2EDuration="9.103284137s" podCreationTimestamp="2026-01-22 09:21:37 +0000 UTC" firstStartedPulling="2026-01-22 09:21:38.243116865 +0000 UTC m=+668.087195928" lastFinishedPulling="2026-01-22 09:21:44.456554854 +0000 UTC m=+674.300633957" observedRunningTime="2026-01-22 09:21:46.101438292 +0000 UTC m=+675.945517365" watchObservedRunningTime="2026-01-22 09:21:46.103284137 +0000 UTC m=+675.947363200" Jan 22 09:21:48 crc kubenswrapper[4892]: I0122 09:21:48.068067 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-wcg8m" event={"ID":"9911d829-131f-4c59-9268-c0165a5f1126","Type":"ContainerStarted","Data":"85908d118ef91e3e24691091e79cc502a5208c57b3ea2394147add931d13daa1"} Jan 22 09:21:48 crc kubenswrapper[4892]: I0122 09:21:48.087056 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-wcg8m" podStartSLOduration=1.757524383 podStartE2EDuration="11.087041901s" podCreationTimestamp="2026-01-22 09:21:37 +0000 UTC" firstStartedPulling="2026-01-22 09:21:38.377802216 +0000 UTC m=+668.221881269" lastFinishedPulling="2026-01-22 09:21:47.707319724 +0000 UTC m=+677.551398787" observedRunningTime="2026-01-22 09:21:48.084535291 +0000 UTC m=+677.928614394" watchObservedRunningTime="2026-01-22 09:21:48.087041901 +0000 UTC m=+677.931120964" Jan 22 09:21:48 crc kubenswrapper[4892]: I0122 09:21:48.474329 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6485b95b4d-gzgsv" Jan 22 09:21:48 crc kubenswrapper[4892]: I0122 09:21:48.474391 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6485b95b4d-gzgsv" Jan 22 09:21:48 crc kubenswrapper[4892]: I0122 09:21:48.481345 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6485b95b4d-gzgsv" Jan 22 09:21:49 crc kubenswrapper[4892]: I0122 09:21:49.075674 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6485b95b4d-gzgsv" Jan 22 09:21:49 crc kubenswrapper[4892]: I0122 09:21:49.134460 4892 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-95xtn"] Jan 22 09:21:53 crc kubenswrapper[4892]: I0122 09:21:53.214320 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-c4k72" Jan 22 09:21:58 crc kubenswrapper[4892]: I0122 09:21:58.236936 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t6lmb" Jan 22 09:22:10 crc kubenswrapper[4892]: I0122 09:22:10.915960 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8"] Jan 22 09:22:10 crc kubenswrapper[4892]: I0122 09:22:10.917696 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8" Jan 22 09:22:10 crc kubenswrapper[4892]: I0122 09:22:10.919741 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 22 09:22:10 crc kubenswrapper[4892]: I0122 09:22:10.925043 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8"] Jan 22 09:22:11 crc kubenswrapper[4892]: I0122 09:22:11.017944 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvzfd\" (UniqueName: \"kubernetes.io/projected/fe89e20a-62fc-4d26-ae68-73810243a106-kube-api-access-zvzfd\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8\" (UID: \"fe89e20a-62fc-4d26-ae68-73810243a106\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8" Jan 22 09:22:11 crc kubenswrapper[4892]: I0122 09:22:11.018021 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fe89e20a-62fc-4d26-ae68-73810243a106-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8\" (UID: \"fe89e20a-62fc-4d26-ae68-73810243a106\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8" Jan 22 09:22:11 crc kubenswrapper[4892]: I0122 09:22:11.018109 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fe89e20a-62fc-4d26-ae68-73810243a106-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8\" (UID: \"fe89e20a-62fc-4d26-ae68-73810243a106\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8" Jan 22 09:22:11 crc kubenswrapper[4892]: I0122 09:22:11.119810 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvzfd\" (UniqueName: \"kubernetes.io/projected/fe89e20a-62fc-4d26-ae68-73810243a106-kube-api-access-zvzfd\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8\" (UID: \"fe89e20a-62fc-4d26-ae68-73810243a106\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8" Jan 22 09:22:11 crc kubenswrapper[4892]: I0122 09:22:11.119872 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fe89e20a-62fc-4d26-ae68-73810243a106-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8\" (UID: 
\"fe89e20a-62fc-4d26-ae68-73810243a106\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8" Jan 22 09:22:11 crc kubenswrapper[4892]: I0122 09:22:11.119903 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fe89e20a-62fc-4d26-ae68-73810243a106-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8\" (UID: \"fe89e20a-62fc-4d26-ae68-73810243a106\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8" Jan 22 09:22:11 crc kubenswrapper[4892]: I0122 09:22:11.120409 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fe89e20a-62fc-4d26-ae68-73810243a106-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8\" (UID: \"fe89e20a-62fc-4d26-ae68-73810243a106\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8" Jan 22 09:22:11 crc kubenswrapper[4892]: I0122 09:22:11.120580 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fe89e20a-62fc-4d26-ae68-73810243a106-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8\" (UID: \"fe89e20a-62fc-4d26-ae68-73810243a106\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8" Jan 22 09:22:11 crc kubenswrapper[4892]: I0122 09:22:11.148010 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvzfd\" (UniqueName: \"kubernetes.io/projected/fe89e20a-62fc-4d26-ae68-73810243a106-kube-api-access-zvzfd\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8\" (UID: \"fe89e20a-62fc-4d26-ae68-73810243a106\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8" Jan 22 09:22:11 crc kubenswrapper[4892]: I0122 09:22:11.247237 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8" Jan 22 09:22:11 crc kubenswrapper[4892]: I0122 09:22:11.633226 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8"] Jan 22 09:22:12 crc kubenswrapper[4892]: I0122 09:22:12.195349 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8" event={"ID":"fe89e20a-62fc-4d26-ae68-73810243a106","Type":"ContainerStarted","Data":"e37ab8d0a6e8737a0cec6c43c6d441119903e5d79357f163260c5334fd232a9e"} Jan 22 09:22:14 crc kubenswrapper[4892]: I0122 09:22:14.173140 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-95xtn" podUID="ab72073f-69cb-4719-b896-54618a6925db" containerName="console" containerID="cri-o://a67beadcdf48407f8325ab9e8ee85c8ec3b824c650824157f0be65277264a6a2" gracePeriod=15 Jan 22 09:22:14 crc kubenswrapper[4892]: I0122 09:22:14.210152 4892 generic.go:334] "Generic (PLEG): container finished" podID="fe89e20a-62fc-4d26-ae68-73810243a106" containerID="53a026d2a18ba1ca2c21ba8d4a89ded1fa68f6ff58b9edfc1d8f6b7939f7cb29" exitCode=0 Jan 22 09:22:14 crc kubenswrapper[4892]: I0122 09:22:14.210204 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8" event={"ID":"fe89e20a-62fc-4d26-ae68-73810243a106","Type":"ContainerDied","Data":"53a026d2a18ba1ca2c21ba8d4a89ded1fa68f6ff58b9edfc1d8f6b7939f7cb29"} Jan 22 09:22:14 crc kubenswrapper[4892]: I0122 09:22:14.700205 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-95xtn_ab72073f-69cb-4719-b896-54618a6925db/console/0.log" Jan 22 09:22:14 crc kubenswrapper[4892]: I0122 09:22:14.700265 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-95xtn" Jan 22 09:22:14 crc kubenswrapper[4892]: I0122 09:22:14.872109 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab72073f-69cb-4719-b896-54618a6925db-console-serving-cert\") pod \"ab72073f-69cb-4719-b896-54618a6925db\" (UID: \"ab72073f-69cb-4719-b896-54618a6925db\") " Jan 22 09:22:14 crc kubenswrapper[4892]: I0122 09:22:14.872202 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ab72073f-69cb-4719-b896-54618a6925db-service-ca\") pod \"ab72073f-69cb-4719-b896-54618a6925db\" (UID: \"ab72073f-69cb-4719-b896-54618a6925db\") " Jan 22 09:22:14 crc kubenswrapper[4892]: I0122 09:22:14.872238 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fm7c\" (UniqueName: \"kubernetes.io/projected/ab72073f-69cb-4719-b896-54618a6925db-kube-api-access-7fm7c\") pod \"ab72073f-69cb-4719-b896-54618a6925db\" (UID: \"ab72073f-69cb-4719-b896-54618a6925db\") " Jan 22 09:22:14 crc kubenswrapper[4892]: I0122 09:22:14.872265 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ab72073f-69cb-4719-b896-54618a6925db-oauth-serving-cert\") pod \"ab72073f-69cb-4719-b896-54618a6925db\" (UID: \"ab72073f-69cb-4719-b896-54618a6925db\") " Jan 22 09:22:14 crc kubenswrapper[4892]: I0122 09:22:14.872327 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ab72073f-69cb-4719-b896-54618a6925db-console-oauth-config\") pod \"ab72073f-69cb-4719-b896-54618a6925db\" (UID: \"ab72073f-69cb-4719-b896-54618a6925db\") " Jan 22 09:22:14 crc kubenswrapper[4892]: I0122 09:22:14.872350 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ab72073f-69cb-4719-b896-54618a6925db-console-config\") pod \"ab72073f-69cb-4719-b896-54618a6925db\" (UID: \"ab72073f-69cb-4719-b896-54618a6925db\") " Jan 22 09:22:14 crc kubenswrapper[4892]: I0122 09:22:14.872400 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab72073f-69cb-4719-b896-54618a6925db-trusted-ca-bundle\") pod \"ab72073f-69cb-4719-b896-54618a6925db\" (UID: \"ab72073f-69cb-4719-b896-54618a6925db\") " Jan 22 09:22:14 crc kubenswrapper[4892]: I0122 09:22:14.873209 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab72073f-69cb-4719-b896-54618a6925db-service-ca" (OuterVolumeSpecName: "service-ca") pod "ab72073f-69cb-4719-b896-54618a6925db" (UID: "ab72073f-69cb-4719-b896-54618a6925db"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:22:14 crc kubenswrapper[4892]: I0122 09:22:14.873241 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab72073f-69cb-4719-b896-54618a6925db-console-config" (OuterVolumeSpecName: "console-config") pod "ab72073f-69cb-4719-b896-54618a6925db" (UID: "ab72073f-69cb-4719-b896-54618a6925db"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:22:14 crc kubenswrapper[4892]: I0122 09:22:14.873329 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab72073f-69cb-4719-b896-54618a6925db-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ab72073f-69cb-4719-b896-54618a6925db" (UID: "ab72073f-69cb-4719-b896-54618a6925db"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:22:14 crc kubenswrapper[4892]: I0122 09:22:14.873359 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab72073f-69cb-4719-b896-54618a6925db-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ab72073f-69cb-4719-b896-54618a6925db" (UID: "ab72073f-69cb-4719-b896-54618a6925db"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:22:14 crc kubenswrapper[4892]: I0122 09:22:14.877465 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab72073f-69cb-4719-b896-54618a6925db-kube-api-access-7fm7c" (OuterVolumeSpecName: "kube-api-access-7fm7c") pod "ab72073f-69cb-4719-b896-54618a6925db" (UID: "ab72073f-69cb-4719-b896-54618a6925db"). InnerVolumeSpecName "kube-api-access-7fm7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:22:14 crc kubenswrapper[4892]: I0122 09:22:14.877508 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab72073f-69cb-4719-b896-54618a6925db-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ab72073f-69cb-4719-b896-54618a6925db" (UID: "ab72073f-69cb-4719-b896-54618a6925db"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:22:14 crc kubenswrapper[4892]: I0122 09:22:14.877979 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab72073f-69cb-4719-b896-54618a6925db-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ab72073f-69cb-4719-b896-54618a6925db" (UID: "ab72073f-69cb-4719-b896-54618a6925db"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:22:14 crc kubenswrapper[4892]: I0122 09:22:14.974417 4892 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ab72073f-69cb-4719-b896-54618a6925db-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:14 crc kubenswrapper[4892]: I0122 09:22:14.974473 4892 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ab72073f-69cb-4719-b896-54618a6925db-console-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:14 crc kubenswrapper[4892]: I0122 09:22:14.974494 4892 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab72073f-69cb-4719-b896-54618a6925db-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:14 crc kubenswrapper[4892]: I0122 09:22:14.974512 4892 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab72073f-69cb-4719-b896-54618a6925db-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:14 crc kubenswrapper[4892]: I0122 09:22:14.974531 4892 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ab72073f-69cb-4719-b896-54618a6925db-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:14 crc kubenswrapper[4892]: I0122 09:22:14.974552 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fm7c\" (UniqueName: \"kubernetes.io/projected/ab72073f-69cb-4719-b896-54618a6925db-kube-api-access-7fm7c\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:14 crc kubenswrapper[4892]: I0122 09:22:14.974575 4892 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ab72073f-69cb-4719-b896-54618a6925db-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:15 crc kubenswrapper[4892]: I0122 09:22:15.218559 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-95xtn_ab72073f-69cb-4719-b896-54618a6925db/console/0.log" Jan 22 09:22:15 crc kubenswrapper[4892]: I0122 09:22:15.218644 4892 generic.go:334] "Generic (PLEG): container finished" podID="ab72073f-69cb-4719-b896-54618a6925db" containerID="a67beadcdf48407f8325ab9e8ee85c8ec3b824c650824157f0be65277264a6a2" exitCode=2 Jan 22 09:22:15 crc kubenswrapper[4892]: I0122 09:22:15.218687 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-95xtn" event={"ID":"ab72073f-69cb-4719-b896-54618a6925db","Type":"ContainerDied","Data":"a67beadcdf48407f8325ab9e8ee85c8ec3b824c650824157f0be65277264a6a2"} Jan 22 09:22:15 crc kubenswrapper[4892]: I0122 09:22:15.218730 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-95xtn" event={"ID":"ab72073f-69cb-4719-b896-54618a6925db","Type":"ContainerDied","Data":"af947ddd1f69a52e1627614778a5dcb4d5bccc7fc996b43da36f5102463d7c2f"} Jan 22 09:22:15 crc kubenswrapper[4892]: I0122 09:22:15.218735 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-95xtn"
Jan 22 09:22:15 crc kubenswrapper[4892]: I0122 09:22:15.218759 4892 scope.go:117] "RemoveContainer" containerID="a67beadcdf48407f8325ab9e8ee85c8ec3b824c650824157f0be65277264a6a2"
Jan 22 09:22:15 crc kubenswrapper[4892]: I0122 09:22:15.250619 4892 scope.go:117] "RemoveContainer" containerID="a67beadcdf48407f8325ab9e8ee85c8ec3b824c650824157f0be65277264a6a2"
Jan 22 09:22:15 crc kubenswrapper[4892]: E0122 09:22:15.251418 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a67beadcdf48407f8325ab9e8ee85c8ec3b824c650824157f0be65277264a6a2\": container with ID starting with a67beadcdf48407f8325ab9e8ee85c8ec3b824c650824157f0be65277264a6a2 not found: ID does not exist" containerID="a67beadcdf48407f8325ab9e8ee85c8ec3b824c650824157f0be65277264a6a2"
Jan 22 09:22:15 crc kubenswrapper[4892]: I0122 09:22:15.251478 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a67beadcdf48407f8325ab9e8ee85c8ec3b824c650824157f0be65277264a6a2"} err="failed to get container status \"a67beadcdf48407f8325ab9e8ee85c8ec3b824c650824157f0be65277264a6a2\": rpc error: code = NotFound desc = could not find container \"a67beadcdf48407f8325ab9e8ee85c8ec3b824c650824157f0be65277264a6a2\": container with ID starting with a67beadcdf48407f8325ab9e8ee85c8ec3b824c650824157f0be65277264a6a2 not found: ID does not exist"
Jan 22 09:22:15 crc kubenswrapper[4892]: I0122 09:22:15.259425 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-95xtn"]
Jan 22 09:22:15 crc kubenswrapper[4892]: I0122 09:22:15.262111 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-95xtn"]
Jan 22 09:22:15 crc kubenswrapper[4892]: I0122 09:22:15.426739 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab72073f-69cb-4719-b896-54618a6925db" path="/var/lib/kubelet/pods/ab72073f-69cb-4719-b896-54618a6925db/volumes"
Jan 22 09:22:18 crc kubenswrapper[4892]: I0122 09:22:18.249787 4892 generic.go:334] "Generic (PLEG): container finished" podID="fe89e20a-62fc-4d26-ae68-73810243a106" containerID="ee2e4329293d3730f4c3449c64b9abf30c41e0aa7a3b074ef2ecc92547fbaffc" exitCode=0
Jan 22 09:22:18 crc kubenswrapper[4892]: I0122 09:22:18.249867 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8" event={"ID":"fe89e20a-62fc-4d26-ae68-73810243a106","Type":"ContainerDied","Data":"ee2e4329293d3730f4c3449c64b9abf30c41e0aa7a3b074ef2ecc92547fbaffc"}
Jan 22 09:22:19 crc kubenswrapper[4892]: I0122 09:22:19.261401 4892 generic.go:334] "Generic (PLEG): container finished" podID="fe89e20a-62fc-4d26-ae68-73810243a106" containerID="c3bf138823f00f8d3f9da26adfe514edfc260701cae5b8833511d34a702d4e44" exitCode=0
Jan 22 09:22:19 crc kubenswrapper[4892]: I0122 09:22:19.261463 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8" event={"ID":"fe89e20a-62fc-4d26-ae68-73810243a106","Type":"ContainerDied","Data":"c3bf138823f00f8d3f9da26adfe514edfc260701cae5b8833511d34a702d4e44"}
Jan 22 09:22:20 crc kubenswrapper[4892]: I0122 09:22:20.534403 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8"
Jan 22 09:22:20 crc kubenswrapper[4892]: I0122 09:22:20.647660 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fe89e20a-62fc-4d26-ae68-73810243a106-bundle\") pod \"fe89e20a-62fc-4d26-ae68-73810243a106\" (UID: \"fe89e20a-62fc-4d26-ae68-73810243a106\") "
Jan 22 09:22:20 crc kubenswrapper[4892]: I0122 09:22:20.647805 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fe89e20a-62fc-4d26-ae68-73810243a106-util\") pod \"fe89e20a-62fc-4d26-ae68-73810243a106\" (UID: \"fe89e20a-62fc-4d26-ae68-73810243a106\") "
Jan 22 09:22:20 crc kubenswrapper[4892]: I0122 09:22:20.647895 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvzfd\" (UniqueName: \"kubernetes.io/projected/fe89e20a-62fc-4d26-ae68-73810243a106-kube-api-access-zvzfd\") pod \"fe89e20a-62fc-4d26-ae68-73810243a106\" (UID: \"fe89e20a-62fc-4d26-ae68-73810243a106\") "
Jan 22 09:22:20 crc kubenswrapper[4892]: I0122 09:22:20.648913 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe89e20a-62fc-4d26-ae68-73810243a106-bundle" (OuterVolumeSpecName: "bundle") pod "fe89e20a-62fc-4d26-ae68-73810243a106" (UID: "fe89e20a-62fc-4d26-ae68-73810243a106"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 09:22:20 crc kubenswrapper[4892]: I0122 09:22:20.655365 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe89e20a-62fc-4d26-ae68-73810243a106-kube-api-access-zvzfd" (OuterVolumeSpecName: "kube-api-access-zvzfd") pod "fe89e20a-62fc-4d26-ae68-73810243a106" (UID: "fe89e20a-62fc-4d26-ae68-73810243a106"). InnerVolumeSpecName "kube-api-access-zvzfd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:22:20 crc kubenswrapper[4892]: I0122 09:22:20.657883 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe89e20a-62fc-4d26-ae68-73810243a106-util" (OuterVolumeSpecName: "util") pod "fe89e20a-62fc-4d26-ae68-73810243a106" (UID: "fe89e20a-62fc-4d26-ae68-73810243a106"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 09:22:20 crc kubenswrapper[4892]: I0122 09:22:20.749523 4892 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fe89e20a-62fc-4d26-ae68-73810243a106-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 09:22:20 crc kubenswrapper[4892]: I0122 09:22:20.749584 4892 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fe89e20a-62fc-4d26-ae68-73810243a106-util\") on node \"crc\" DevicePath \"\""
Jan 22 09:22:20 crc kubenswrapper[4892]: I0122 09:22:20.749596 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvzfd\" (UniqueName: \"kubernetes.io/projected/fe89e20a-62fc-4d26-ae68-73810243a106-kube-api-access-zvzfd\") on node \"crc\" DevicePath \"\""
Jan 22 09:22:21 crc kubenswrapper[4892]: I0122 09:22:21.277072 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8" event={"ID":"fe89e20a-62fc-4d26-ae68-73810243a106","Type":"ContainerDied","Data":"e37ab8d0a6e8737a0cec6c43c6d441119903e5d79357f163260c5334fd232a9e"}
Jan 22 09:22:21 crc kubenswrapper[4892]: I0122 09:22:21.277114 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e37ab8d0a6e8737a0cec6c43c6d441119903e5d79357f163260c5334fd232a9e"
Jan 22 09:22:21 crc kubenswrapper[4892]: I0122 09:22:21.277132 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.229926 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6744fff56c-5c2wg"]
Jan 22 09:22:34 crc kubenswrapper[4892]: E0122 09:22:34.231646 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe89e20a-62fc-4d26-ae68-73810243a106" containerName="extract"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.231730 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe89e20a-62fc-4d26-ae68-73810243a106" containerName="extract"
Jan 22 09:22:34 crc kubenswrapper[4892]: E0122 09:22:34.231817 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe89e20a-62fc-4d26-ae68-73810243a106" containerName="pull"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.231877 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe89e20a-62fc-4d26-ae68-73810243a106" containerName="pull"
Jan 22 09:22:34 crc kubenswrapper[4892]: E0122 09:22:34.231933 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe89e20a-62fc-4d26-ae68-73810243a106" containerName="util"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.231990 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe89e20a-62fc-4d26-ae68-73810243a106" containerName="util"
Jan 22 09:22:34 crc kubenswrapper[4892]: E0122 09:22:34.232052 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab72073f-69cb-4719-b896-54618a6925db" containerName="console"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.232105 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab72073f-69cb-4719-b896-54618a6925db" containerName="console"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.232252 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe89e20a-62fc-4d26-ae68-73810243a106" containerName="extract"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.232336 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab72073f-69cb-4719-b896-54618a6925db" containerName="console"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.232773 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6744fff56c-5c2wg"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.236649 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.236715 4892 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.237686 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.237933 4892 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-24jmt"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.238185 4892 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.252682 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6744fff56c-5c2wg"]
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.413323 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e2e7c48f-6e23-4156-b679-30f2d9735501-apiservice-cert\") pod \"metallb-operator-controller-manager-6744fff56c-5c2wg\" (UID: \"e2e7c48f-6e23-4156-b679-30f2d9735501\") " pod="metallb-system/metallb-operator-controller-manager-6744fff56c-5c2wg"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.413695 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmlkj\" (UniqueName: \"kubernetes.io/projected/e2e7c48f-6e23-4156-b679-30f2d9735501-kube-api-access-kmlkj\") pod \"metallb-operator-controller-manager-6744fff56c-5c2wg\" (UID: \"e2e7c48f-6e23-4156-b679-30f2d9735501\") " pod="metallb-system/metallb-operator-controller-manager-6744fff56c-5c2wg"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.413759 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e2e7c48f-6e23-4156-b679-30f2d9735501-webhook-cert\") pod \"metallb-operator-controller-manager-6744fff56c-5c2wg\" (UID: \"e2e7c48f-6e23-4156-b679-30f2d9735501\") " pod="metallb-system/metallb-operator-controller-manager-6744fff56c-5c2wg"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.514618 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e2e7c48f-6e23-4156-b679-30f2d9735501-apiservice-cert\") pod \"metallb-operator-controller-manager-6744fff56c-5c2wg\" (UID: \"e2e7c48f-6e23-4156-b679-30f2d9735501\") " pod="metallb-system/metallb-operator-controller-manager-6744fff56c-5c2wg"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.514888 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmlkj\" (UniqueName: \"kubernetes.io/projected/e2e7c48f-6e23-4156-b679-30f2d9735501-kube-api-access-kmlkj\") pod \"metallb-operator-controller-manager-6744fff56c-5c2wg\" (UID: \"e2e7c48f-6e23-4156-b679-30f2d9735501\") " pod="metallb-system/metallb-operator-controller-manager-6744fff56c-5c2wg"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.515011 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e2e7c48f-6e23-4156-b679-30f2d9735501-webhook-cert\") pod \"metallb-operator-controller-manager-6744fff56c-5c2wg\" (UID: \"e2e7c48f-6e23-4156-b679-30f2d9735501\") " pod="metallb-system/metallb-operator-controller-manager-6744fff56c-5c2wg"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.520685 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e2e7c48f-6e23-4156-b679-30f2d9735501-apiservice-cert\") pod \"metallb-operator-controller-manager-6744fff56c-5c2wg\" (UID: \"e2e7c48f-6e23-4156-b679-30f2d9735501\") " pod="metallb-system/metallb-operator-controller-manager-6744fff56c-5c2wg"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.521308 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e2e7c48f-6e23-4156-b679-30f2d9735501-webhook-cert\") pod \"metallb-operator-controller-manager-6744fff56c-5c2wg\" (UID: \"e2e7c48f-6e23-4156-b679-30f2d9735501\") " pod="metallb-system/metallb-operator-controller-manager-6744fff56c-5c2wg"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.536228 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmlkj\" (UniqueName: \"kubernetes.io/projected/e2e7c48f-6e23-4156-b679-30f2d9735501-kube-api-access-kmlkj\") pod \"metallb-operator-controller-manager-6744fff56c-5c2wg\" (UID: \"e2e7c48f-6e23-4156-b679-30f2d9735501\") " pod="metallb-system/metallb-operator-controller-manager-6744fff56c-5c2wg"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.548045 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6744fff56c-5c2wg"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.565550 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5bdbd58466-bwr22"]
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.566300 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5bdbd58466-bwr22"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.569501 4892 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.569870 4892 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-gdmdq"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.571468 4892 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.639731 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5bdbd58466-bwr22"]
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.640414 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b49a3e83-8e00-4934-8968-97d1905959d0-apiservice-cert\") pod \"metallb-operator-webhook-server-5bdbd58466-bwr22\" (UID: \"b49a3e83-8e00-4934-8968-97d1905959d0\") " pod="metallb-system/metallb-operator-webhook-server-5bdbd58466-bwr22"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.640451 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdnv8\" (UniqueName: \"kubernetes.io/projected/b49a3e83-8e00-4934-8968-97d1905959d0-kube-api-access-rdnv8\") pod \"metallb-operator-webhook-server-5bdbd58466-bwr22\" (UID: \"b49a3e83-8e00-4934-8968-97d1905959d0\") " pod="metallb-system/metallb-operator-webhook-server-5bdbd58466-bwr22"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.640497 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b49a3e83-8e00-4934-8968-97d1905959d0-webhook-cert\") pod \"metallb-operator-webhook-server-5bdbd58466-bwr22\" (UID: \"b49a3e83-8e00-4934-8968-97d1905959d0\") " pod="metallb-system/metallb-operator-webhook-server-5bdbd58466-bwr22"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.741415 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b49a3e83-8e00-4934-8968-97d1905959d0-webhook-cert\") pod \"metallb-operator-webhook-server-5bdbd58466-bwr22\" (UID: \"b49a3e83-8e00-4934-8968-97d1905959d0\") " pod="metallb-system/metallb-operator-webhook-server-5bdbd58466-bwr22"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.741495 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b49a3e83-8e00-4934-8968-97d1905959d0-apiservice-cert\") pod \"metallb-operator-webhook-server-5bdbd58466-bwr22\" (UID: \"b49a3e83-8e00-4934-8968-97d1905959d0\") " pod="metallb-system/metallb-operator-webhook-server-5bdbd58466-bwr22"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.741529 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdnv8\" (UniqueName: \"kubernetes.io/projected/b49a3e83-8e00-4934-8968-97d1905959d0-kube-api-access-rdnv8\") pod \"metallb-operator-webhook-server-5bdbd58466-bwr22\" (UID: \"b49a3e83-8e00-4934-8968-97d1905959d0\") " pod="metallb-system/metallb-operator-webhook-server-5bdbd58466-bwr22"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.744875 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b49a3e83-8e00-4934-8968-97d1905959d0-apiservice-cert\") pod \"metallb-operator-webhook-server-5bdbd58466-bwr22\" (UID: \"b49a3e83-8e00-4934-8968-97d1905959d0\") " pod="metallb-system/metallb-operator-webhook-server-5bdbd58466-bwr22"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.762949 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b49a3e83-8e00-4934-8968-97d1905959d0-webhook-cert\") pod \"metallb-operator-webhook-server-5bdbd58466-bwr22\" (UID: \"b49a3e83-8e00-4934-8968-97d1905959d0\") " pod="metallb-system/metallb-operator-webhook-server-5bdbd58466-bwr22"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.772115 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdnv8\" (UniqueName: \"kubernetes.io/projected/b49a3e83-8e00-4934-8968-97d1905959d0-kube-api-access-rdnv8\") pod \"metallb-operator-webhook-server-5bdbd58466-bwr22\" (UID: \"b49a3e83-8e00-4934-8968-97d1905959d0\") " pod="metallb-system/metallb-operator-webhook-server-5bdbd58466-bwr22"
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.830547 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6744fff56c-5c2wg"]
Jan 22 09:22:34 crc kubenswrapper[4892]: W0122 09:22:34.836155 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2e7c48f_6e23_4156_b679_30f2d9735501.slice/crio-b85370634b0625b862b44d9a9bffbe125588638c727407d4f75649716dc0a17d WatchSource:0}: Error finding container b85370634b0625b862b44d9a9bffbe125588638c727407d4f75649716dc0a17d: Status 404 returned error can't find the container with id b85370634b0625b862b44d9a9bffbe125588638c727407d4f75649716dc0a17d
Jan 22 09:22:34 crc kubenswrapper[4892]: I0122 09:22:34.961175 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5bdbd58466-bwr22"
Jan 22 09:22:35 crc kubenswrapper[4892]: I0122 09:22:35.353518 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6744fff56c-5c2wg" event={"ID":"e2e7c48f-6e23-4156-b679-30f2d9735501","Type":"ContainerStarted","Data":"b85370634b0625b862b44d9a9bffbe125588638c727407d4f75649716dc0a17d"}
Jan 22 09:22:35 crc kubenswrapper[4892]: I0122 09:22:35.387711 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5bdbd58466-bwr22"]
Jan 22 09:22:36 crc kubenswrapper[4892]: I0122 09:22:36.361389 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5bdbd58466-bwr22" event={"ID":"b49a3e83-8e00-4934-8968-97d1905959d0","Type":"ContainerStarted","Data":"41b56614da508b485ceac2dc588829def573f54b21d7c77e866ccf0d0aca83c2"}
Jan 22 09:22:40 crc kubenswrapper[4892]: I0122 09:22:40.386046 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5bdbd58466-bwr22" event={"ID":"b49a3e83-8e00-4934-8968-97d1905959d0","Type":"ContainerStarted","Data":"6c369f59a1695113c4713d3dd0909c8b9f21fc72c170ecd3fdc583bb77f0b73b"}
Jan 22 09:22:40 crc kubenswrapper[4892]: I0122 09:22:40.387452 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5bdbd58466-bwr22"
Jan 22 09:22:40 crc kubenswrapper[4892]: I0122 09:22:40.404582 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5bdbd58466-bwr22" podStartSLOduration=2.392858364 podStartE2EDuration="6.404565871s" podCreationTimestamp="2026-01-22 09:22:34 +0000 UTC" firstStartedPulling="2026-01-22 09:22:35.412419714 +0000 UTC m=+725.256498777" lastFinishedPulling="2026-01-22 09:22:39.424127221 +0000 UTC m=+729.268206284" observedRunningTime="2026-01-22 09:22:40.402727766 +0000 UTC m=+730.246806829" watchObservedRunningTime="2026-01-22 09:22:40.404565871 +0000 UTC m=+730.248644934"
Jan 22 09:22:49 crc kubenswrapper[4892]: I0122 09:22:49.466566 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6744fff56c-5c2wg" event={"ID":"e2e7c48f-6e23-4156-b679-30f2d9735501","Type":"ContainerStarted","Data":"1cc4033af64884ae9d5a3636f54d514e4861fd1cfd7fd1e48b77cbe437abaa3e"}
Jan 22 09:22:49 crc kubenswrapper[4892]: I0122 09:22:49.467187 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6744fff56c-5c2wg"
Jan 22 09:22:54 crc kubenswrapper[4892]: I0122 09:22:54.965919 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5bdbd58466-bwr22"
Jan 22 09:22:54 crc kubenswrapper[4892]: I0122 09:22:54.983876 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6744fff56c-5c2wg" podStartSLOduration=6.596792808 podStartE2EDuration="20.98385635s" podCreationTimestamp="2026-01-22 09:22:34 +0000 UTC" firstStartedPulling="2026-01-22 09:22:34.839491747 +0000 UTC m=+724.683570810" lastFinishedPulling="2026-01-22 09:22:49.226555289 +0000 UTC m=+739.070634352" observedRunningTime="2026-01-22 09:22:49.498480736 +0000 UTC m=+739.342559809" watchObservedRunningTime="2026-01-22 09:22:54.98385635 +0000 UTC m=+744.827935413"
Jan 22 09:23:11 crc kubenswrapper[4892]: I0122 09:23:11.335661 4892 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 22 09:23:16 crc kubenswrapper[4892]: I0122 09:23:16.323263 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 09:23:16 crc kubenswrapper[4892]: I0122 09:23:16.324465 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 09:23:24 crc kubenswrapper[4892]: I0122 09:23:24.551391 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6744fff56c-5c2wg"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.227184 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-szhls"]
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.229299 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-szhls"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.231572 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.231696 4892 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-vtsv8"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.232059 4892 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.232274 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-drtv6"]
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.241257 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-drtv6"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.241140 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-drtv6"]
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.245173 4892 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.251688 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/667e6efb-6488-461d-8e5f-380e05c4956e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-drtv6\" (UID: \"667e6efb-6488-461d-8e5f-380e05c4956e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-drtv6"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.251747 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0f101c21-435f-4ede-8170-a8d399e50580-frr-conf\") pod \"frr-k8s-szhls\" (UID: \"0f101c21-435f-4ede-8170-a8d399e50580\") " pod="metallb-system/frr-k8s-szhls"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.251793 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0f101c21-435f-4ede-8170-a8d399e50580-metrics\") pod \"frr-k8s-szhls\" (UID: \"0f101c21-435f-4ede-8170-a8d399e50580\") " pod="metallb-system/frr-k8s-szhls"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.251818 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvjg7\" (UniqueName: \"kubernetes.io/projected/667e6efb-6488-461d-8e5f-380e05c4956e-kube-api-access-fvjg7\") pod \"frr-k8s-webhook-server-7df86c4f6c-drtv6\" (UID: \"667e6efb-6488-461d-8e5f-380e05c4956e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-drtv6"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.251860 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0f101c21-435f-4ede-8170-a8d399e50580-metrics-certs\") pod \"frr-k8s-szhls\" (UID: \"0f101c21-435f-4ede-8170-a8d399e50580\") " pod="metallb-system/frr-k8s-szhls"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.251890 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0f101c21-435f-4ede-8170-a8d399e50580-reloader\") pod \"frr-k8s-szhls\" (UID: \"0f101c21-435f-4ede-8170-a8d399e50580\") " pod="metallb-system/frr-k8s-szhls"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.251917 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0f101c21-435f-4ede-8170-a8d399e50580-frr-sockets\") pod \"frr-k8s-szhls\" (UID: \"0f101c21-435f-4ede-8170-a8d399e50580\") " pod="metallb-system/frr-k8s-szhls"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.251944 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2flnf\" (UniqueName: \"kubernetes.io/projected/0f101c21-435f-4ede-8170-a8d399e50580-kube-api-access-2flnf\") pod \"frr-k8s-szhls\" (UID: \"0f101c21-435f-4ede-8170-a8d399e50580\") " pod="metallb-system/frr-k8s-szhls"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.251962 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0f101c21-435f-4ede-8170-a8d399e50580-frr-startup\") pod \"frr-k8s-szhls\" (UID: \"0f101c21-435f-4ede-8170-a8d399e50580\") " pod="metallb-system/frr-k8s-szhls"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.321251 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-2bjk4"]
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.322193 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-2bjk4"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.324013 4892 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-nbxlk"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.324250 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.324450 4892 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.329166 4892 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.337914 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-szvst"]
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.338789 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-szvst"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.343694 4892 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.352879 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cjb6\" (UniqueName: \"kubernetes.io/projected/caa80b1d-b3d2-47d6-99e6-73420bc5f61d-kube-api-access-5cjb6\") pod \"speaker-2bjk4\" (UID: \"caa80b1d-b3d2-47d6-99e6-73420bc5f61d\") " pod="metallb-system/speaker-2bjk4"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.352921 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0f101c21-435f-4ede-8170-a8d399e50580-reloader\") pod \"frr-k8s-szhls\" (UID: \"0f101c21-435f-4ede-8170-a8d399e50580\") " pod="metallb-system/frr-k8s-szhls"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.352945 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e6301fe9-08d8-4bac-87e9-227fcc218129-cert\") pod \"controller-6968d8fdc4-szvst\" (UID: \"e6301fe9-08d8-4bac-87e9-227fcc218129\") " pod="metallb-system/controller-6968d8fdc4-szvst"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.352966 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0f101c21-435f-4ede-8170-a8d399e50580-frr-sockets\") pod \"frr-k8s-szhls\" (UID: \"0f101c21-435f-4ede-8170-a8d399e50580\") " pod="metallb-system/frr-k8s-szhls"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.352966 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-szvst"]
pods=["metallb-system/controller-6968d8fdc4-szvst"] Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.352991 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2flnf\" (UniqueName: \"kubernetes.io/projected/0f101c21-435f-4ede-8170-a8d399e50580-kube-api-access-2flnf\") pod \"frr-k8s-szhls\" (UID: \"0f101c21-435f-4ede-8170-a8d399e50580\") " pod="metallb-system/frr-k8s-szhls" Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.353007 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqvgx\" (UniqueName: \"kubernetes.io/projected/e6301fe9-08d8-4bac-87e9-227fcc218129-kube-api-access-kqvgx\") pod \"controller-6968d8fdc4-szvst\" (UID: \"e6301fe9-08d8-4bac-87e9-227fcc218129\") " pod="metallb-system/controller-6968d8fdc4-szvst" Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.353027 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0f101c21-435f-4ede-8170-a8d399e50580-frr-startup\") pod \"frr-k8s-szhls\" (UID: \"0f101c21-435f-4ede-8170-a8d399e50580\") " pod="metallb-system/frr-k8s-szhls" Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.353043 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6301fe9-08d8-4bac-87e9-227fcc218129-metrics-certs\") pod \"controller-6968d8fdc4-szvst\" (UID: \"e6301fe9-08d8-4bac-87e9-227fcc218129\") " pod="metallb-system/controller-6968d8fdc4-szvst" Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.353131 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/caa80b1d-b3d2-47d6-99e6-73420bc5f61d-metrics-certs\") pod \"speaker-2bjk4\" (UID: \"caa80b1d-b3d2-47d6-99e6-73420bc5f61d\") " pod="metallb-system/speaker-2bjk4" Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.353160 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/caa80b1d-b3d2-47d6-99e6-73420bc5f61d-metallb-excludel2\") pod \"speaker-2bjk4\" (UID: \"caa80b1d-b3d2-47d6-99e6-73420bc5f61d\") " pod="metallb-system/speaker-2bjk4" Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.353241 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/667e6efb-6488-461d-8e5f-380e05c4956e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-drtv6\" (UID: \"667e6efb-6488-461d-8e5f-380e05c4956e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-drtv6" Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.353278 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0f101c21-435f-4ede-8170-a8d399e50580-frr-conf\") pod \"frr-k8s-szhls\" (UID: \"0f101c21-435f-4ede-8170-a8d399e50580\") " pod="metallb-system/frr-k8s-szhls" Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.353324 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/caa80b1d-b3d2-47d6-99e6-73420bc5f61d-memberlist\") pod \"speaker-2bjk4\" (UID: \"caa80b1d-b3d2-47d6-99e6-73420bc5f61d\") " pod="metallb-system/speaker-2bjk4" Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 
09:23:25.353386 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0f101c21-435f-4ede-8170-a8d399e50580-metrics\") pod \"frr-k8s-szhls\" (UID: \"0f101c21-435f-4ede-8170-a8d399e50580\") " pod="metallb-system/frr-k8s-szhls" Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.353419 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvjg7\" (UniqueName: \"kubernetes.io/projected/667e6efb-6488-461d-8e5f-380e05c4956e-kube-api-access-fvjg7\") pod \"frr-k8s-webhook-server-7df86c4f6c-drtv6\" (UID: \"667e6efb-6488-461d-8e5f-380e05c4956e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-drtv6" Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.353444 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0f101c21-435f-4ede-8170-a8d399e50580-metrics-certs\") pod \"frr-k8s-szhls\" (UID: \"0f101c21-435f-4ede-8170-a8d399e50580\") " pod="metallb-system/frr-k8s-szhls" Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.353451 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0f101c21-435f-4ede-8170-a8d399e50580-reloader\") pod \"frr-k8s-szhls\" (UID: \"0f101c21-435f-4ede-8170-a8d399e50580\") " pod="metallb-system/frr-k8s-szhls" Jan 22 09:23:25 crc kubenswrapper[4892]: E0122 09:23:25.353580 4892 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.353605 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0f101c21-435f-4ede-8170-a8d399e50580-frr-sockets\") pod \"frr-k8s-szhls\" (UID: \"0f101c21-435f-4ede-8170-a8d399e50580\") " pod="metallb-system/frr-k8s-szhls" Jan 22 09:23:25 crc kubenswrapper[4892]: E0122 09:23:25.353636 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/667e6efb-6488-461d-8e5f-380e05c4956e-cert podName:667e6efb-6488-461d-8e5f-380e05c4956e nodeName:}" failed. No retries permitted until 2026-01-22 09:23:25.853616709 +0000 UTC m=+775.697695782 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/667e6efb-6488-461d-8e5f-380e05c4956e-cert") pod "frr-k8s-webhook-server-7df86c4f6c-drtv6" (UID: "667e6efb-6488-461d-8e5f-380e05c4956e") : secret "frr-k8s-webhook-server-cert" not found Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.353865 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0f101c21-435f-4ede-8170-a8d399e50580-frr-conf\") pod \"frr-k8s-szhls\" (UID: \"0f101c21-435f-4ede-8170-a8d399e50580\") " pod="metallb-system/frr-k8s-szhls" Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.353867 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0f101c21-435f-4ede-8170-a8d399e50580-metrics\") pod \"frr-k8s-szhls\" (UID: \"0f101c21-435f-4ede-8170-a8d399e50580\") " pod="metallb-system/frr-k8s-szhls" Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.354802 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0f101c21-435f-4ede-8170-a8d399e50580-frr-startup\") pod \"frr-k8s-szhls\" (UID: \"0f101c21-435f-4ede-8170-a8d399e50580\") " pod="metallb-system/frr-k8s-szhls" Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.361162 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0f101c21-435f-4ede-8170-a8d399e50580-metrics-certs\") pod \"frr-k8s-szhls\" (UID: \"0f101c21-435f-4ede-8170-a8d399e50580\") " pod="metallb-system/frr-k8s-szhls" Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.377067 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvjg7\" (UniqueName: \"kubernetes.io/projected/667e6efb-6488-461d-8e5f-380e05c4956e-kube-api-access-fvjg7\") pod \"frr-k8s-webhook-server-7df86c4f6c-drtv6\" (UID: \"667e6efb-6488-461d-8e5f-380e05c4956e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-drtv6" Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.383442 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2flnf\" (UniqueName: \"kubernetes.io/projected/0f101c21-435f-4ede-8170-a8d399e50580-kube-api-access-2flnf\") pod \"frr-k8s-szhls\" (UID: \"0f101c21-435f-4ede-8170-a8d399e50580\") " pod="metallb-system/frr-k8s-szhls" Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.453987 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6301fe9-08d8-4bac-87e9-227fcc218129-metrics-certs\") pod \"controller-6968d8fdc4-szvst\" (UID: \"e6301fe9-08d8-4bac-87e9-227fcc218129\") " pod="metallb-system/controller-6968d8fdc4-szvst" Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.454028 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/caa80b1d-b3d2-47d6-99e6-73420bc5f61d-metrics-certs\") pod \"speaker-2bjk4\" (UID: \"caa80b1d-b3d2-47d6-99e6-73420bc5f61d\") " pod="metallb-system/speaker-2bjk4" Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.454043 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/caa80b1d-b3d2-47d6-99e6-73420bc5f61d-metallb-excludel2\") pod \"speaker-2bjk4\" (UID: \"caa80b1d-b3d2-47d6-99e6-73420bc5f61d\") " 
pod="metallb-system/speaker-2bjk4" Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.454096 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/caa80b1d-b3d2-47d6-99e6-73420bc5f61d-memberlist\") pod \"speaker-2bjk4\" (UID: \"caa80b1d-b3d2-47d6-99e6-73420bc5f61d\") " pod="metallb-system/speaker-2bjk4" Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.454154 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cjb6\" (UniqueName: \"kubernetes.io/projected/caa80b1d-b3d2-47d6-99e6-73420bc5f61d-kube-api-access-5cjb6\") pod \"speaker-2bjk4\" (UID: \"caa80b1d-b3d2-47d6-99e6-73420bc5f61d\") " pod="metallb-system/speaker-2bjk4" Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.454174 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e6301fe9-08d8-4bac-87e9-227fcc218129-cert\") pod \"controller-6968d8fdc4-szvst\" (UID: \"e6301fe9-08d8-4bac-87e9-227fcc218129\") " pod="metallb-system/controller-6968d8fdc4-szvst" Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.454201 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqvgx\" (UniqueName: \"kubernetes.io/projected/e6301fe9-08d8-4bac-87e9-227fcc218129-kube-api-access-kqvgx\") pod \"controller-6968d8fdc4-szvst\" (UID: \"e6301fe9-08d8-4bac-87e9-227fcc218129\") " pod="metallb-system/controller-6968d8fdc4-szvst" Jan 22 09:23:25 crc kubenswrapper[4892]: E0122 09:23:25.454557 4892 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 22 09:23:25 crc kubenswrapper[4892]: E0122 09:23:25.454599 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6301fe9-08d8-4bac-87e9-227fcc218129-metrics-certs podName:e6301fe9-08d8-4bac-87e9-227fcc218129 nodeName:}" failed. No retries permitted until 2026-01-22 09:23:25.954587465 +0000 UTC m=+775.798666518 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e6301fe9-08d8-4bac-87e9-227fcc218129-metrics-certs") pod "controller-6968d8fdc4-szvst" (UID: "e6301fe9-08d8-4bac-87e9-227fcc218129") : secret "controller-certs-secret" not found Jan 22 09:23:25 crc kubenswrapper[4892]: E0122 09:23:25.455265 4892 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 22 09:23:25 crc kubenswrapper[4892]: E0122 09:23:25.455448 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/caa80b1d-b3d2-47d6-99e6-73420bc5f61d-memberlist podName:caa80b1d-b3d2-47d6-99e6-73420bc5f61d nodeName:}" failed. No retries permitted until 2026-01-22 09:23:25.955425355 +0000 UTC m=+775.799504488 (durationBeforeRetry 500ms). 
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.456192 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/caa80b1d-b3d2-47d6-99e6-73420bc5f61d-metallb-excludel2\") pod \"speaker-2bjk4\" (UID: \"caa80b1d-b3d2-47d6-99e6-73420bc5f61d\") " pod="metallb-system/speaker-2bjk4"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.458005 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/caa80b1d-b3d2-47d6-99e6-73420bc5f61d-metrics-certs\") pod \"speaker-2bjk4\" (UID: \"caa80b1d-b3d2-47d6-99e6-73420bc5f61d\") " pod="metallb-system/speaker-2bjk4"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.458451 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e6301fe9-08d8-4bac-87e9-227fcc218129-cert\") pod \"controller-6968d8fdc4-szvst\" (UID: \"e6301fe9-08d8-4bac-87e9-227fcc218129\") " pod="metallb-system/controller-6968d8fdc4-szvst"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.469316 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cjb6\" (UniqueName: \"kubernetes.io/projected/caa80b1d-b3d2-47d6-99e6-73420bc5f61d-kube-api-access-5cjb6\") pod \"speaker-2bjk4\" (UID: \"caa80b1d-b3d2-47d6-99e6-73420bc5f61d\") " pod="metallb-system/speaker-2bjk4"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.469904 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqvgx\" (UniqueName: \"kubernetes.io/projected/e6301fe9-08d8-4bac-87e9-227fcc218129-kube-api-access-kqvgx\") pod \"controller-6968d8fdc4-szvst\" (UID: \"e6301fe9-08d8-4bac-87e9-227fcc218129\") " pod="metallb-system/controller-6968d8fdc4-szvst"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.550102 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-szhls"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.858664 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/667e6efb-6488-461d-8e5f-380e05c4956e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-drtv6\" (UID: \"667e6efb-6488-461d-8e5f-380e05c4956e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-drtv6"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.862809 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/667e6efb-6488-461d-8e5f-380e05c4956e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-drtv6\" (UID: \"667e6efb-6488-461d-8e5f-380e05c4956e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-drtv6"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.864238 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-drtv6"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.959373 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/caa80b1d-b3d2-47d6-99e6-73420bc5f61d-memberlist\") pod \"speaker-2bjk4\" (UID: \"caa80b1d-b3d2-47d6-99e6-73420bc5f61d\") " pod="metallb-system/speaker-2bjk4"
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.959677 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6301fe9-08d8-4bac-87e9-227fcc218129-metrics-certs\") pod \"controller-6968d8fdc4-szvst\" (UID: \"e6301fe9-08d8-4bac-87e9-227fcc218129\") " pod="metallb-system/controller-6968d8fdc4-szvst"
Jan 22 09:23:25 crc kubenswrapper[4892]: E0122 09:23:25.960495 4892 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 22 09:23:25 crc kubenswrapper[4892]: E0122 09:23:25.960576 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/caa80b1d-b3d2-47d6-99e6-73420bc5f61d-memberlist podName:caa80b1d-b3d2-47d6-99e6-73420bc5f61d nodeName:}" failed. No retries permitted until 2026-01-22 09:23:26.960556031 +0000 UTC m=+776.804635094 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/caa80b1d-b3d2-47d6-99e6-73420bc5f61d-memberlist") pod "speaker-2bjk4" (UID: "caa80b1d-b3d2-47d6-99e6-73420bc5f61d") : secret "metallb-memberlist" not found
Jan 22 09:23:25 crc kubenswrapper[4892]: I0122 09:23:25.966713 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6301fe9-08d8-4bac-87e9-227fcc218129-metrics-certs\") pod \"controller-6968d8fdc4-szvst\" (UID: \"e6301fe9-08d8-4bac-87e9-227fcc218129\") " pod="metallb-system/controller-6968d8fdc4-szvst"
Jan 22 09:23:26 crc kubenswrapper[4892]: I0122 09:23:26.102500 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-drtv6"]
Jan 22 09:23:26 crc kubenswrapper[4892]: I0122 09:23:26.262350 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-szvst"
Jan 22 09:23:26 crc kubenswrapper[4892]: I0122 09:23:26.459406 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-szvst"]
Jan 22 09:23:26 crc kubenswrapper[4892]: W0122 09:23:26.472207 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6301fe9_08d8_4bac_87e9_227fcc218129.slice/crio-3d9f0f5f37c3408dc032192ec4edb48a3e02a15a4820ee5455fc042729311609 WatchSource:0}: Error finding container 3d9f0f5f37c3408dc032192ec4edb48a3e02a15a4820ee5455fc042729311609: Status 404 returned error can't find the container with id 3d9f0f5f37c3408dc032192ec4edb48a3e02a15a4820ee5455fc042729311609
Jan 22 09:23:26 crc kubenswrapper[4892]: I0122 09:23:26.668754 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-drtv6" event={"ID":"667e6efb-6488-461d-8e5f-380e05c4956e","Type":"ContainerStarted","Data":"ebeff0a299f26ee7bd0cbe18d876b426639b2d0261e1e4a10a3d1deaad7f2d64"}
Jan 22 09:23:26 crc kubenswrapper[4892]: I0122 09:23:26.670878 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-szhls" event={"ID":"0f101c21-435f-4ede-8170-a8d399e50580","Type":"ContainerStarted","Data":"60a812a0546857669304adf5fb39b69b96dcc7b2a8dd2f57384e1c774ec148ba"}
Jan 22 09:23:26 crc kubenswrapper[4892]: I0122 09:23:26.673051 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-szvst" event={"ID":"e6301fe9-08d8-4bac-87e9-227fcc218129","Type":"ContainerStarted","Data":"f589e2ea50c9861615e37f16737f3b1d29b14ce314e68ae56d99ca0ed0c823fe"}
Jan 22 09:23:26 crc kubenswrapper[4892]: I0122 09:23:26.673086 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-szvst" event={"ID":"e6301fe9-08d8-4bac-87e9-227fcc218129","Type":"ContainerStarted","Data":"3d9f0f5f37c3408dc032192ec4edb48a3e02a15a4820ee5455fc042729311609"}
Jan 22 09:23:26 crc kubenswrapper[4892]: I0122 09:23:26.991630 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/caa80b1d-b3d2-47d6-99e6-73420bc5f61d-memberlist\") pod \"speaker-2bjk4\" (UID: \"caa80b1d-b3d2-47d6-99e6-73420bc5f61d\") " pod="metallb-system/speaker-2bjk4"
Jan 22 09:23:27 crc kubenswrapper[4892]: I0122 09:23:27.010931 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/caa80b1d-b3d2-47d6-99e6-73420bc5f61d-memberlist\") pod \"speaker-2bjk4\" (UID: \"caa80b1d-b3d2-47d6-99e6-73420bc5f61d\") " pod="metallb-system/speaker-2bjk4"
Jan 22 09:23:27 crc kubenswrapper[4892]: I0122 09:23:27.137309 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-2bjk4"
Jan 22 09:23:27 crc kubenswrapper[4892]: W0122 09:23:27.158218 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcaa80b1d_b3d2_47d6_99e6_73420bc5f61d.slice/crio-4a8f8611c4fef1448a30be69957c90f300c2d704902d73a1030dd2c00797ac1a WatchSource:0}: Error finding container 4a8f8611c4fef1448a30be69957c90f300c2d704902d73a1030dd2c00797ac1a: Status 404 returned error can't find the container with id 4a8f8611c4fef1448a30be69957c90f300c2d704902d73a1030dd2c00797ac1a
Jan 22 09:23:27 crc kubenswrapper[4892]: I0122 09:23:27.684883 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2bjk4" event={"ID":"caa80b1d-b3d2-47d6-99e6-73420bc5f61d","Type":"ContainerStarted","Data":"8cb02a7f072095ade9908b3d8f2abd373ff1ce2f5aef059036e68eb5745c0bfa"}
Jan 22 09:23:27 crc kubenswrapper[4892]: I0122 09:23:27.684943 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2bjk4" event={"ID":"caa80b1d-b3d2-47d6-99e6-73420bc5f61d","Type":"ContainerStarted","Data":"b9962cbcdab088ff7b85c4f5d547c76878859c9930283505ccb326acc848199f"}
Jan 22 09:23:27 crc kubenswrapper[4892]: I0122 09:23:27.684958 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2bjk4" event={"ID":"caa80b1d-b3d2-47d6-99e6-73420bc5f61d","Type":"ContainerStarted","Data":"4a8f8611c4fef1448a30be69957c90f300c2d704902d73a1030dd2c00797ac1a"}
Jan 22 09:23:27 crc kubenswrapper[4892]: I0122 09:23:27.685180 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-2bjk4"
Jan 22 09:23:27 crc kubenswrapper[4892]: I0122 09:23:27.690399 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-szvst" event={"ID":"e6301fe9-08d8-4bac-87e9-227fcc218129","Type":"ContainerStarted","Data":"574cf3f6a640a2bd4507a9b66ee75ce5ed40d78a2f9c8e9b144fcfbf91ea441f"}
Jan 22 09:23:27 crc kubenswrapper[4892]: I0122 09:23:27.691058 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-szvst"
Jan 22 09:23:27 crc kubenswrapper[4892]: I0122 09:23:27.725654 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-2bjk4" podStartSLOduration=2.725639487 podStartE2EDuration="2.725639487s" podCreationTimestamp="2026-01-22 09:23:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:23:27.706542105 +0000 UTC m=+777.550621168" watchObservedRunningTime="2026-01-22 09:23:27.725639487 +0000 UTC m=+777.569718540"
Jan 22 09:23:27 crc kubenswrapper[4892]: I0122 09:23:27.728604 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-szvst" podStartSLOduration=2.728592089 podStartE2EDuration="2.728592089s" podCreationTimestamp="2026-01-22 09:23:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:23:27.724888909 +0000 UTC m=+777.568967972" watchObservedRunningTime="2026-01-22 09:23:27.728592089 +0000 UTC m=+777.572671152"
Jan 22 09:23:33 crc kubenswrapper[4892]: I0122 09:23:33.727723 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-drtv6" event={"ID":"667e6efb-6488-461d-8e5f-380e05c4956e","Type":"ContainerStarted","Data":"44f28cd957fa9d34d2981840a21ad836a1619a6f3deaad514756b64ad6810789"}
event={"ID":"667e6efb-6488-461d-8e5f-380e05c4956e","Type":"ContainerStarted","Data":"44f28cd957fa9d34d2981840a21ad836a1619a6f3deaad514756b64ad6810789"} Jan 22 09:23:33 crc kubenswrapper[4892]: I0122 09:23:33.728206 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-drtv6" Jan 22 09:23:33 crc kubenswrapper[4892]: I0122 09:23:33.729485 4892 generic.go:334] "Generic (PLEG): container finished" podID="0f101c21-435f-4ede-8170-a8d399e50580" containerID="e458be6d0799d3605aa3c38ccd93fd1e453bec7b55e0365899ecdbdedb0cfc5c" exitCode=0 Jan 22 09:23:33 crc kubenswrapper[4892]: I0122 09:23:33.729504 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-szhls" event={"ID":"0f101c21-435f-4ede-8170-a8d399e50580","Type":"ContainerDied","Data":"e458be6d0799d3605aa3c38ccd93fd1e453bec7b55e0365899ecdbdedb0cfc5c"} Jan 22 09:23:33 crc kubenswrapper[4892]: I0122 09:23:33.746115 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-drtv6" podStartSLOduration=1.849291469 podStartE2EDuration="8.746096642s" podCreationTimestamp="2026-01-22 09:23:25 +0000 UTC" firstStartedPulling="2026-01-22 09:23:26.111330863 +0000 UTC m=+775.955409926" lastFinishedPulling="2026-01-22 09:23:33.008136036 +0000 UTC m=+782.852215099" observedRunningTime="2026-01-22 09:23:33.744660387 +0000 UTC m=+783.588739450" watchObservedRunningTime="2026-01-22 09:23:33.746096642 +0000 UTC m=+783.590175705" Jan 22 09:23:34 crc kubenswrapper[4892]: I0122 09:23:34.738802 4892 generic.go:334] "Generic (PLEG): container finished" podID="0f101c21-435f-4ede-8170-a8d399e50580" containerID="9dfb2d8a4bdcc2a9933b02364cd62e1f17be6eb540b79cb5defbe0b9e845a80c" exitCode=0 Jan 22 09:23:34 crc kubenswrapper[4892]: I0122 09:23:34.738878 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-szhls" event={"ID":"0f101c21-435f-4ede-8170-a8d399e50580","Type":"ContainerDied","Data":"9dfb2d8a4bdcc2a9933b02364cd62e1f17be6eb540b79cb5defbe0b9e845a80c"} Jan 22 09:23:35 crc kubenswrapper[4892]: I0122 09:23:35.745868 4892 generic.go:334] "Generic (PLEG): container finished" podID="0f101c21-435f-4ede-8170-a8d399e50580" containerID="c351d804e40cdeca9fec5aaa01142a9c3ede32c7e914459cb6c30b95c7e5ded1" exitCode=0 Jan 22 09:23:35 crc kubenswrapper[4892]: I0122 09:23:35.745918 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-szhls" event={"ID":"0f101c21-435f-4ede-8170-a8d399e50580","Type":"ContainerDied","Data":"c351d804e40cdeca9fec5aaa01142a9c3ede32c7e914459cb6c30b95c7e5ded1"} Jan 22 09:23:36 crc kubenswrapper[4892]: I0122 09:23:36.266990 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-szvst" Jan 22 09:23:36 crc kubenswrapper[4892]: I0122 09:23:36.759076 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-szhls" event={"ID":"0f101c21-435f-4ede-8170-a8d399e50580","Type":"ContainerStarted","Data":"13f596093473ba97294be05d7b8fba9fff3364f43f4749f6ba7e67e722beb9a8"} Jan 22 09:23:36 crc kubenswrapper[4892]: I0122 09:23:36.759117 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-szhls" event={"ID":"0f101c21-435f-4ede-8170-a8d399e50580","Type":"ContainerStarted","Data":"6d11f1a218c5ed2d2081d5d5be5f970808ede899a5a6422be2f1f52ac5ea72e2"} Jan 22 09:23:36 crc kubenswrapper[4892]: I0122 09:23:36.759128 4892 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/frr-k8s-szhls" event={"ID":"0f101c21-435f-4ede-8170-a8d399e50580","Type":"ContainerStarted","Data":"9fa1dc85bf8c8c742e973d7a19f4da693916ed5b54ed2575e84ff0a173a34e82"} Jan 22 09:23:36 crc kubenswrapper[4892]: I0122 09:23:36.759138 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-szhls" event={"ID":"0f101c21-435f-4ede-8170-a8d399e50580","Type":"ContainerStarted","Data":"d7fe79e9a124db58696b37eaad50f199a4d729d3fd1ad87352731578528f1dc0"} Jan 22 09:23:36 crc kubenswrapper[4892]: I0122 09:23:36.759146 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-szhls" event={"ID":"0f101c21-435f-4ede-8170-a8d399e50580","Type":"ContainerStarted","Data":"59b3a7e31e34c06b4e4d9a64a88d772897058e394c3410c2eec01dd39ed19746"} Jan 22 09:23:36 crc kubenswrapper[4892]: I0122 09:23:36.759154 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-szhls" event={"ID":"0f101c21-435f-4ede-8170-a8d399e50580","Type":"ContainerStarted","Data":"6d00a821282db0944fac6d0efc6c1de0e6989cef94350b47a008960e04612aa1"} Jan 22 09:23:36 crc kubenswrapper[4892]: I0122 09:23:36.759347 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-szhls" Jan 22 09:23:36 crc kubenswrapper[4892]: I0122 09:23:36.785465 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-szhls" podStartSLOduration=5.060457554 podStartE2EDuration="11.785438425s" podCreationTimestamp="2026-01-22 09:23:25 +0000 UTC" firstStartedPulling="2026-01-22 09:23:26.303836286 +0000 UTC m=+776.147915359" lastFinishedPulling="2026-01-22 09:23:33.028817167 +0000 UTC m=+782.872896230" observedRunningTime="2026-01-22 09:23:36.780054985 +0000 UTC m=+786.624134118" watchObservedRunningTime="2026-01-22 09:23:36.785438425 +0000 UTC m=+786.629517508" Jan 22 09:23:37 crc kubenswrapper[4892]: I0122 09:23:37.141275 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-2bjk4" Jan 22 09:23:39 crc kubenswrapper[4892]: I0122 09:23:39.843638 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-r5zww"] Jan 22 09:23:39 crc kubenswrapper[4892]: I0122 09:23:39.844877 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-r5zww" Jan 22 09:23:39 crc kubenswrapper[4892]: I0122 09:23:39.847469 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-xkntw" Jan 22 09:23:39 crc kubenswrapper[4892]: I0122 09:23:39.847525 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 22 09:23:39 crc kubenswrapper[4892]: I0122 09:23:39.852378 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 22 09:23:39 crc kubenswrapper[4892]: I0122 09:23:39.860266 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-r5zww"] Jan 22 09:23:39 crc kubenswrapper[4892]: I0122 09:23:39.969464 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-499lp\" (UniqueName: \"kubernetes.io/projected/eb491f85-904c-4da8-a541-8dfa6921776b-kube-api-access-499lp\") pod \"openstack-operator-index-r5zww\" (UID: \"eb491f85-904c-4da8-a541-8dfa6921776b\") " pod="openstack-operators/openstack-operator-index-r5zww" Jan 22 09:23:40 crc kubenswrapper[4892]: I0122 09:23:40.071210 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-499lp\" (UniqueName: \"kubernetes.io/projected/eb491f85-904c-4da8-a541-8dfa6921776b-kube-api-access-499lp\") pod \"openstack-operator-index-r5zww\" (UID: \"eb491f85-904c-4da8-a541-8dfa6921776b\") " pod="openstack-operators/openstack-operator-index-r5zww" Jan 22 09:23:40 crc kubenswrapper[4892]: I0122 09:23:40.091561 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-499lp\" (UniqueName: \"kubernetes.io/projected/eb491f85-904c-4da8-a541-8dfa6921776b-kube-api-access-499lp\") pod \"openstack-operator-index-r5zww\" (UID: \"eb491f85-904c-4da8-a541-8dfa6921776b\") " pod="openstack-operators/openstack-operator-index-r5zww" Jan 22 09:23:40 crc kubenswrapper[4892]: I0122 09:23:40.161695 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-r5zww" Jan 22 09:23:40 crc kubenswrapper[4892]: I0122 09:23:40.380930 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-r5zww"] Jan 22 09:23:40 crc kubenswrapper[4892]: W0122 09:23:40.388499 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb491f85_904c_4da8_a541_8dfa6921776b.slice/crio-05a421e7770c120d5ba5e884d521f23561a188ce6d1501b406b3c7a6788cb4ef WatchSource:0}: Error finding container 05a421e7770c120d5ba5e884d521f23561a188ce6d1501b406b3c7a6788cb4ef: Status 404 returned error can't find the container with id 05a421e7770c120d5ba5e884d521f23561a188ce6d1501b406b3c7a6788cb4ef Jan 22 09:23:40 crc kubenswrapper[4892]: I0122 09:23:40.550996 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-szhls" Jan 22 09:23:40 crc kubenswrapper[4892]: I0122 09:23:40.621513 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-szhls" Jan 22 09:23:40 crc kubenswrapper[4892]: I0122 09:23:40.803628 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-r5zww" event={"ID":"eb491f85-904c-4da8-a541-8dfa6921776b","Type":"ContainerStarted","Data":"05a421e7770c120d5ba5e884d521f23561a188ce6d1501b406b3c7a6788cb4ef"} Jan 22 09:23:42 crc kubenswrapper[4892]: I0122 09:23:42.816160 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-r5zww" event={"ID":"eb491f85-904c-4da8-a541-8dfa6921776b","Type":"ContainerStarted","Data":"afe19628120515e98ba3779ef940040280b32c9ac48f5b0d22cb66b97d15ce9c"} Jan 22 09:23:42 crc kubenswrapper[4892]: I0122 09:23:42.835250 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-r5zww" podStartSLOduration=1.864407131 podStartE2EDuration="3.835218531s" podCreationTimestamp="2026-01-22 09:23:39 +0000 UTC" firstStartedPulling="2026-01-22 09:23:40.390940822 +0000 UTC m=+790.235019885" lastFinishedPulling="2026-01-22 09:23:42.361752222 +0000 UTC m=+792.205831285" observedRunningTime="2026-01-22 09:23:42.832586307 +0000 UTC m=+792.676665430" watchObservedRunningTime="2026-01-22 09:23:42.835218531 +0000 UTC m=+792.679297624" Jan 22 09:23:43 crc kubenswrapper[4892]: I0122 09:23:43.215277 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-r5zww"] Jan 22 09:23:43 crc kubenswrapper[4892]: I0122 09:23:43.819623 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-hq2gz"] Jan 22 09:23:43 crc kubenswrapper[4892]: I0122 09:23:43.822463 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-hq2gz" Jan 22 09:23:43 crc kubenswrapper[4892]: I0122 09:23:43.842418 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-hq2gz"] Jan 22 09:23:43 crc kubenswrapper[4892]: I0122 09:23:43.928799 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqqxj\" (UniqueName: \"kubernetes.io/projected/016ec7ec-1244-47ab-81ba-957ed4b83b4f-kube-api-access-zqqxj\") pod \"openstack-operator-index-hq2gz\" (UID: \"016ec7ec-1244-47ab-81ba-957ed4b83b4f\") " pod="openstack-operators/openstack-operator-index-hq2gz" Jan 22 09:23:44 crc kubenswrapper[4892]: I0122 09:23:44.029878 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqqxj\" (UniqueName: \"kubernetes.io/projected/016ec7ec-1244-47ab-81ba-957ed4b83b4f-kube-api-access-zqqxj\") pod \"openstack-operator-index-hq2gz\" (UID: \"016ec7ec-1244-47ab-81ba-957ed4b83b4f\") " pod="openstack-operators/openstack-operator-index-hq2gz" Jan 22 09:23:44 crc kubenswrapper[4892]: I0122 09:23:44.053076 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqqxj\" (UniqueName: \"kubernetes.io/projected/016ec7ec-1244-47ab-81ba-957ed4b83b4f-kube-api-access-zqqxj\") pod \"openstack-operator-index-hq2gz\" (UID: \"016ec7ec-1244-47ab-81ba-957ed4b83b4f\") " pod="openstack-operators/openstack-operator-index-hq2gz" Jan 22 09:23:44 crc kubenswrapper[4892]: I0122 09:23:44.158195 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-hq2gz" Jan 22 09:23:44 crc kubenswrapper[4892]: I0122 09:23:44.394224 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-hq2gz"] Jan 22 09:23:44 crc kubenswrapper[4892]: W0122 09:23:44.402059 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod016ec7ec_1244_47ab_81ba_957ed4b83b4f.slice/crio-8a491b9975681cdd72554c8cf3e40db1c6a25c1e397ce82f2bb74a9d34df962f WatchSource:0}: Error finding container 8a491b9975681cdd72554c8cf3e40db1c6a25c1e397ce82f2bb74a9d34df962f: Status 404 returned error can't find the container with id 8a491b9975681cdd72554c8cf3e40db1c6a25c1e397ce82f2bb74a9d34df962f Jan 22 09:23:44 crc kubenswrapper[4892]: I0122 09:23:44.828943 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hq2gz" event={"ID":"016ec7ec-1244-47ab-81ba-957ed4b83b4f","Type":"ContainerStarted","Data":"8a491b9975681cdd72554c8cf3e40db1c6a25c1e397ce82f2bb74a9d34df962f"} Jan 22 09:23:44 crc kubenswrapper[4892]: I0122 09:23:44.829053 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-r5zww" podUID="eb491f85-904c-4da8-a541-8dfa6921776b" containerName="registry-server" containerID="cri-o://afe19628120515e98ba3779ef940040280b32c9ac48f5b0d22cb66b97d15ce9c" gracePeriod=2 Jan 22 09:23:45 crc kubenswrapper[4892]: I0122 09:23:45.186060 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-r5zww" Jan 22 09:23:45 crc kubenswrapper[4892]: I0122 09:23:45.356465 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-499lp\" (UniqueName: \"kubernetes.io/projected/eb491f85-904c-4da8-a541-8dfa6921776b-kube-api-access-499lp\") pod \"eb491f85-904c-4da8-a541-8dfa6921776b\" (UID: \"eb491f85-904c-4da8-a541-8dfa6921776b\") " Jan 22 09:23:45 crc kubenswrapper[4892]: I0122 09:23:45.361230 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb491f85-904c-4da8-a541-8dfa6921776b-kube-api-access-499lp" (OuterVolumeSpecName: "kube-api-access-499lp") pod "eb491f85-904c-4da8-a541-8dfa6921776b" (UID: "eb491f85-904c-4da8-a541-8dfa6921776b"). InnerVolumeSpecName "kube-api-access-499lp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:23:45 crc kubenswrapper[4892]: I0122 09:23:45.458761 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-499lp\" (UniqueName: \"kubernetes.io/projected/eb491f85-904c-4da8-a541-8dfa6921776b-kube-api-access-499lp\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:45 crc kubenswrapper[4892]: I0122 09:23:45.554317 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-szhls" Jan 22 09:23:45 crc kubenswrapper[4892]: I0122 09:23:45.838911 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hq2gz" event={"ID":"016ec7ec-1244-47ab-81ba-957ed4b83b4f","Type":"ContainerStarted","Data":"a68bf1aa7af2d051b0533fde7bbd4782fcc72b080289913fd858424bdad4fe37"} Jan 22 09:23:45 crc kubenswrapper[4892]: I0122 09:23:45.842890 4892 generic.go:334] "Generic (PLEG): container finished" podID="eb491f85-904c-4da8-a541-8dfa6921776b" containerID="afe19628120515e98ba3779ef940040280b32c9ac48f5b0d22cb66b97d15ce9c" exitCode=0 Jan 22 09:23:45 crc kubenswrapper[4892]: I0122 09:23:45.842934 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-r5zww" Jan 22 09:23:45 crc kubenswrapper[4892]: I0122 09:23:45.842975 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-r5zww" event={"ID":"eb491f85-904c-4da8-a541-8dfa6921776b","Type":"ContainerDied","Data":"afe19628120515e98ba3779ef940040280b32c9ac48f5b0d22cb66b97d15ce9c"} Jan 22 09:23:45 crc kubenswrapper[4892]: I0122 09:23:45.844368 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-r5zww" event={"ID":"eb491f85-904c-4da8-a541-8dfa6921776b","Type":"ContainerDied","Data":"05a421e7770c120d5ba5e884d521f23561a188ce6d1501b406b3c7a6788cb4ef"} Jan 22 09:23:45 crc kubenswrapper[4892]: I0122 09:23:45.844633 4892 scope.go:117] "RemoveContainer" containerID="afe19628120515e98ba3779ef940040280b32c9ac48f5b0d22cb66b97d15ce9c" Jan 22 09:23:45 crc kubenswrapper[4892]: I0122 09:23:45.878785 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-drtv6" Jan 22 09:23:45 crc kubenswrapper[4892]: I0122 09:23:45.880843 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-hq2gz" podStartSLOduration=2.443147663 podStartE2EDuration="2.880815155s" podCreationTimestamp="2026-01-22 09:23:43 +0000 UTC" firstStartedPulling="2026-01-22 09:23:44.408461379 +0000 UTC m=+794.252540432" lastFinishedPulling="2026-01-22 09:23:44.846128831 +0000 UTC m=+794.690207924" observedRunningTime="2026-01-22 09:23:45.868891866 +0000 UTC m=+795.712970969" watchObservedRunningTime="2026-01-22 09:23:45.880815155 +0000 UTC m=+795.724894228" Jan 22 09:23:45 crc kubenswrapper[4892]: I0122 09:23:45.883120 4892 scope.go:117] "RemoveContainer" containerID="afe19628120515e98ba3779ef940040280b32c9ac48f5b0d22cb66b97d15ce9c" Jan 22 09:23:45 crc kubenswrapper[4892]: E0122 09:23:45.883764 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afe19628120515e98ba3779ef940040280b32c9ac48f5b0d22cb66b97d15ce9c\": container with ID starting with afe19628120515e98ba3779ef940040280b32c9ac48f5b0d22cb66b97d15ce9c not found: ID does not exist" containerID="afe19628120515e98ba3779ef940040280b32c9ac48f5b0d22cb66b97d15ce9c" Jan 22 09:23:45 crc kubenswrapper[4892]: I0122 09:23:45.883822 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afe19628120515e98ba3779ef940040280b32c9ac48f5b0d22cb66b97d15ce9c"} err="failed to get container status \"afe19628120515e98ba3779ef940040280b32c9ac48f5b0d22cb66b97d15ce9c\": rpc error: code = NotFound desc = could not find container \"afe19628120515e98ba3779ef940040280b32c9ac48f5b0d22cb66b97d15ce9c\": container with ID starting with afe19628120515e98ba3779ef940040280b32c9ac48f5b0d22cb66b97d15ce9c not found: ID does not exist" Jan 22 09:23:45 crc kubenswrapper[4892]: I0122 09:23:45.897522 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-r5zww"] Jan 22 09:23:45 crc kubenswrapper[4892]: I0122 09:23:45.903165 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-r5zww"] Jan 22 09:23:46 crc kubenswrapper[4892]: I0122 09:23:46.323308 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:23:46 crc kubenswrapper[4892]: I0122 09:23:46.323796 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:23:47 crc kubenswrapper[4892]: I0122 09:23:47.424472 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb491f85-904c-4da8-a541-8dfa6921776b" path="/var/lib/kubelet/pods/eb491f85-904c-4da8-a541-8dfa6921776b/volumes" Jan 22 09:23:54 crc kubenswrapper[4892]: I0122 09:23:54.158725 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-hq2gz" Jan 22 09:23:54 crc kubenswrapper[4892]: I0122 09:23:54.159266 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-hq2gz" Jan 22 09:23:54 crc kubenswrapper[4892]: I0122 09:23:54.187079 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-hq2gz" Jan 22 09:23:54 crc kubenswrapper[4892]: I0122 09:23:54.942983 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-hq2gz" Jan 22 09:23:56 crc kubenswrapper[4892]: I0122 09:23:56.264251 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd"] Jan 22 09:23:56 crc kubenswrapper[4892]: E0122 09:23:56.265005 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb491f85-904c-4da8-a541-8dfa6921776b" containerName="registry-server" Jan 22 09:23:56 crc kubenswrapper[4892]: I0122 09:23:56.265029 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb491f85-904c-4da8-a541-8dfa6921776b" containerName="registry-server" Jan 22 09:23:56 crc kubenswrapper[4892]: I0122 09:23:56.265311 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb491f85-904c-4da8-a541-8dfa6921776b" containerName="registry-server" Jan 22 09:23:56 crc kubenswrapper[4892]: I0122 09:23:56.267059 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd" Jan 22 09:23:56 crc kubenswrapper[4892]: I0122 09:23:56.270317 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-fsz58" Jan 22 09:23:56 crc kubenswrapper[4892]: I0122 09:23:56.273601 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd"] Jan 22 09:23:56 crc kubenswrapper[4892]: I0122 09:23:56.317261 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ccgq\" (UniqueName: \"kubernetes.io/projected/97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8-kube-api-access-7ccgq\") pod \"fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd\" (UID: \"97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8\") " pod="openstack-operators/fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd" Jan 22 09:23:56 crc kubenswrapper[4892]: I0122 09:23:56.317351 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8-bundle\") pod \"fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd\" (UID: \"97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8\") " pod="openstack-operators/fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd" Jan 22 09:23:56 crc kubenswrapper[4892]: I0122 09:23:56.317506 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8-util\") pod \"fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd\" (UID: \"97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8\") " pod="openstack-operators/fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd" Jan 22 09:23:56 crc kubenswrapper[4892]: I0122 09:23:56.419387 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8-bundle\") pod \"fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd\" (UID: \"97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8\") " pod="openstack-operators/fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd" Jan 22 09:23:56 crc kubenswrapper[4892]: I0122 09:23:56.419596 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8-util\") pod \"fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd\" (UID: \"97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8\") " pod="openstack-operators/fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd" Jan 22 09:23:56 crc kubenswrapper[4892]: I0122 09:23:56.419725 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ccgq\" (UniqueName: \"kubernetes.io/projected/97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8-kube-api-access-7ccgq\") pod \"fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd\" (UID: \"97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8\") " pod="openstack-operators/fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd" Jan 22 09:23:56 crc kubenswrapper[4892]: I0122 09:23:56.420134 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8-util\") pod \"fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd\" (UID: \"97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8\") " pod="openstack-operators/fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd" Jan 22 09:23:56 crc kubenswrapper[4892]: I0122 09:23:56.420207 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8-bundle\") pod \"fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd\" (UID: \"97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8\") " pod="openstack-operators/fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd" Jan 22 09:23:56 crc kubenswrapper[4892]: I0122 09:23:56.447946 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ccgq\" (UniqueName: \"kubernetes.io/projected/97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8-kube-api-access-7ccgq\") pod \"fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd\" (UID: \"97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8\") " pod="openstack-operators/fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd" Jan 22 09:23:56 crc kubenswrapper[4892]: I0122 09:23:56.584844 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd" Jan 22 09:23:57 crc kubenswrapper[4892]: I0122 09:23:57.017979 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd"] Jan 22 09:23:57 crc kubenswrapper[4892]: W0122 09:23:57.023406 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97ff7aaf_1e3d_4a24_b873_ba2ae47fccd8.slice/crio-47ee23bab2174300acee86af6af783f5cf794238e00d02e7ca4dcc610677025e WatchSource:0}: Error finding container 47ee23bab2174300acee86af6af783f5cf794238e00d02e7ca4dcc610677025e: Status 404 returned error can't find the container with id 47ee23bab2174300acee86af6af783f5cf794238e00d02e7ca4dcc610677025e Jan 22 09:23:57 crc kubenswrapper[4892]: I0122 09:23:57.940953 4892 generic.go:334] "Generic (PLEG): container finished" podID="97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8" containerID="59a9d030adac3451166fdf8517ca80a7d653aba2a3f28642ec4f82ec639ab4ff" exitCode=0 Jan 22 09:23:57 crc kubenswrapper[4892]: I0122 09:23:57.941068 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd" event={"ID":"97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8","Type":"ContainerDied","Data":"59a9d030adac3451166fdf8517ca80a7d653aba2a3f28642ec4f82ec639ab4ff"} Jan 22 09:23:57 crc kubenswrapper[4892]: I0122 09:23:57.941233 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd" event={"ID":"97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8","Type":"ContainerStarted","Data":"47ee23bab2174300acee86af6af783f5cf794238e00d02e7ca4dcc610677025e"} Jan 22 09:23:59 crc kubenswrapper[4892]: I0122 09:23:59.957476 4892 generic.go:334] "Generic (PLEG): container finished" podID="97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8" containerID="e3e58e4c24304b953fc89edc556bcc3316281c162784d760ca8620e27d49c3c4" exitCode=0 Jan 22 09:23:59 crc kubenswrapper[4892]: I0122 09:23:59.957531 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd" event={"ID":"97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8","Type":"ContainerDied","Data":"e3e58e4c24304b953fc89edc556bcc3316281c162784d760ca8620e27d49c3c4"} Jan 22 09:24:00 crc kubenswrapper[4892]: I0122 09:24:00.966169 4892 generic.go:334] "Generic (PLEG): container finished" podID="97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8" containerID="8a15268c6ec4ce8e62ead991ed8918a8edecf8113dc960d8f1cdaf3fe0c0e90d" exitCode=0 Jan 22 09:24:00 crc kubenswrapper[4892]: I0122 09:24:00.966272 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd" event={"ID":"97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8","Type":"ContainerDied","Data":"8a15268c6ec4ce8e62ead991ed8918a8edecf8113dc960d8f1cdaf3fe0c0e90d"} Jan 22 09:24:02 crc kubenswrapper[4892]: I0122 09:24:02.274682 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd" Jan 22 09:24:02 crc kubenswrapper[4892]: I0122 09:24:02.387054 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8-util\") pod \"97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8\" (UID: \"97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8\") " Jan 22 09:24:02 crc kubenswrapper[4892]: I0122 09:24:02.387145 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ccgq\" (UniqueName: \"kubernetes.io/projected/97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8-kube-api-access-7ccgq\") pod \"97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8\" (UID: \"97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8\") " Jan 22 09:24:02 crc kubenswrapper[4892]: I0122 09:24:02.387168 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8-bundle\") pod \"97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8\" (UID: \"97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8\") " Jan 22 09:24:02 crc kubenswrapper[4892]: I0122 09:24:02.388155 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8-bundle" (OuterVolumeSpecName: "bundle") pod "97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8" (UID: "97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:24:02 crc kubenswrapper[4892]: I0122 09:24:02.396509 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8-kube-api-access-7ccgq" (OuterVolumeSpecName: "kube-api-access-7ccgq") pod "97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8" (UID: "97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8"). InnerVolumeSpecName "kube-api-access-7ccgq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:24:02 crc kubenswrapper[4892]: I0122 09:24:02.488622 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ccgq\" (UniqueName: \"kubernetes.io/projected/97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8-kube-api-access-7ccgq\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:02 crc kubenswrapper[4892]: I0122 09:24:02.488674 4892 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:02 crc kubenswrapper[4892]: I0122 09:24:02.631455 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8-util" (OuterVolumeSpecName: "util") pod "97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8" (UID: "97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:24:02 crc kubenswrapper[4892]: I0122 09:24:02.692200 4892 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8-util\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:02 crc kubenswrapper[4892]: I0122 09:24:02.980131 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd" event={"ID":"97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8","Type":"ContainerDied","Data":"47ee23bab2174300acee86af6af783f5cf794238e00d02e7ca4dcc610677025e"} Jan 22 09:24:02 crc kubenswrapper[4892]: I0122 09:24:02.980170 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47ee23bab2174300acee86af6af783f5cf794238e00d02e7ca4dcc610677025e" Jan 22 09:24:02 crc kubenswrapper[4892]: I0122 09:24:02.980201 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd" Jan 22 09:24:08 crc kubenswrapper[4892]: I0122 09:24:08.837595 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-698d6bb84b-sckbn"] Jan 22 09:24:08 crc kubenswrapper[4892]: E0122 09:24:08.838460 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8" containerName="util" Jan 22 09:24:08 crc kubenswrapper[4892]: I0122 09:24:08.838478 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8" containerName="util" Jan 22 09:24:08 crc kubenswrapper[4892]: E0122 09:24:08.838490 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8" containerName="pull" Jan 22 09:24:08 crc kubenswrapper[4892]: I0122 09:24:08.838497 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8" containerName="pull" Jan 22 09:24:08 crc kubenswrapper[4892]: E0122 09:24:08.838512 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8" containerName="extract" Jan 22 09:24:08 crc kubenswrapper[4892]: I0122 09:24:08.838518 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8" containerName="extract" Jan 22 09:24:08 crc kubenswrapper[4892]: I0122 09:24:08.838643 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8" containerName="extract" Jan 22 09:24:08 crc kubenswrapper[4892]: I0122 09:24:08.839180 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-698d6bb84b-sckbn" Jan 22 09:24:08 crc kubenswrapper[4892]: I0122 09:24:08.842445 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-xz7gl" Jan 22 09:24:08 crc kubenswrapper[4892]: I0122 09:24:08.870623 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-698d6bb84b-sckbn"] Jan 22 09:24:08 crc kubenswrapper[4892]: I0122 09:24:08.970548 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwczz\" (UniqueName: \"kubernetes.io/projected/bf11bbca-62bd-4421-b0be-a62f87a6d600-kube-api-access-qwczz\") pod \"openstack-operator-controller-init-698d6bb84b-sckbn\" (UID: \"bf11bbca-62bd-4421-b0be-a62f87a6d600\") " pod="openstack-operators/openstack-operator-controller-init-698d6bb84b-sckbn" Jan 22 09:24:09 crc kubenswrapper[4892]: I0122 09:24:09.072305 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwczz\" (UniqueName: \"kubernetes.io/projected/bf11bbca-62bd-4421-b0be-a62f87a6d600-kube-api-access-qwczz\") pod \"openstack-operator-controller-init-698d6bb84b-sckbn\" (UID: \"bf11bbca-62bd-4421-b0be-a62f87a6d600\") " pod="openstack-operators/openstack-operator-controller-init-698d6bb84b-sckbn" Jan 22 09:24:09 crc kubenswrapper[4892]: I0122 09:24:09.091115 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwczz\" (UniqueName: \"kubernetes.io/projected/bf11bbca-62bd-4421-b0be-a62f87a6d600-kube-api-access-qwczz\") pod \"openstack-operator-controller-init-698d6bb84b-sckbn\" (UID: 
\"bf11bbca-62bd-4421-b0be-a62f87a6d600\") " pod="openstack-operators/openstack-operator-controller-init-698d6bb84b-sckbn" Jan 22 09:24:09 crc kubenswrapper[4892]: I0122 09:24:09.161126 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-698d6bb84b-sckbn" Jan 22 09:24:09 crc kubenswrapper[4892]: I0122 09:24:09.622651 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-698d6bb84b-sckbn"] Jan 22 09:24:10 crc kubenswrapper[4892]: I0122 09:24:10.016973 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-698d6bb84b-sckbn" event={"ID":"bf11bbca-62bd-4421-b0be-a62f87a6d600","Type":"ContainerStarted","Data":"c452a5b3cf0059ecdb49c77f2c229c5e4ce70589458cf07ffc2b6c68a5839d05"} Jan 22 09:24:16 crc kubenswrapper[4892]: I0122 09:24:16.323876 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:24:16 crc kubenswrapper[4892]: I0122 09:24:16.324410 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:24:16 crc kubenswrapper[4892]: I0122 09:24:16.324452 4892 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" Jan 22 09:24:16 crc kubenswrapper[4892]: I0122 09:24:16.325043 4892 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f3241bed9938434158615102d7fd185345d457bb0f2990573e82de1469f205ee"} pod="openshift-machine-config-operator/machine-config-daemon-w87tf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 09:24:16 crc kubenswrapper[4892]: I0122 09:24:16.325098 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" containerID="cri-o://f3241bed9938434158615102d7fd185345d457bb0f2990573e82de1469f205ee" gracePeriod=600 Jan 22 09:24:17 crc kubenswrapper[4892]: I0122 09:24:17.061328 4892 generic.go:334] "Generic (PLEG): container finished" podID="4765e554-3060-4876-90fe-5e054619d7a1" containerID="f3241bed9938434158615102d7fd185345d457bb0f2990573e82de1469f205ee" exitCode=0 Jan 22 09:24:17 crc kubenswrapper[4892]: I0122 09:24:17.061370 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerDied","Data":"f3241bed9938434158615102d7fd185345d457bb0f2990573e82de1469f205ee"} Jan 22 09:24:17 crc kubenswrapper[4892]: I0122 09:24:17.061401 4892 scope.go:117] "RemoveContainer" containerID="fd44fb84f1abd6068b0406af0dfd71eaeeb9adbf12f608ae3695759f64602a98" Jan 22 09:24:19 crc kubenswrapper[4892]: I0122 09:24:19.077037 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerStarted","Data":"117e0c1b92dcf102d5c4006956ffbc6d1b9e2073ac26c26fea7a169bb0945ba2"} Jan 22 09:24:19 crc kubenswrapper[4892]: I0122 09:24:19.078246 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-698d6bb84b-sckbn" event={"ID":"bf11bbca-62bd-4421-b0be-a62f87a6d600","Type":"ContainerStarted","Data":"4658d13cef41debe933b9840a5480c08a8a3c7c4e6dcc32c73fe24f807860c20"} Jan 22 09:24:19 crc kubenswrapper[4892]: I0122 09:24:19.078406 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-698d6bb84b-sckbn" Jan 22 09:24:19 crc kubenswrapper[4892]: I0122 09:24:19.128662 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-698d6bb84b-sckbn" podStartSLOduration=2.486901651 podStartE2EDuration="11.128645293s" podCreationTimestamp="2026-01-22 09:24:08 +0000 UTC" firstStartedPulling="2026-01-22 09:24:09.621235602 +0000 UTC m=+819.465314685" lastFinishedPulling="2026-01-22 09:24:18.262979254 +0000 UTC m=+828.107058327" observedRunningTime="2026-01-22 09:24:19.127635969 +0000 UTC m=+828.971715032" watchObservedRunningTime="2026-01-22 09:24:19.128645293 +0000 UTC m=+828.972724356" Jan 22 09:24:29 crc kubenswrapper[4892]: I0122 09:24:29.165835 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-698d6bb84b-sckbn" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.461151 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-mcfls"] Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.462643 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-mcfls" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.467455 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-dgb4d" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.473929 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-fnrjr"] Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.474932 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-fnrjr" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.480258 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-sx9p8"] Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.481200 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-sx9p8" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.481503 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-fmxkw" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.484044 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-4xzsp" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.486064 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-mcfls"] Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.508936 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffcv6\" (UniqueName: \"kubernetes.io/projected/f7ec268a-c82e-455e-b4b9-d0f96998c015-kube-api-access-ffcv6\") pod \"barbican-operator-controller-manager-59dd8b7cbf-mcfls\" (UID: \"f7ec268a-c82e-455e-b4b9-d0f96998c015\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-mcfls" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.509128 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-sx9p8"] Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.533804 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-fnrjr"] Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.596341 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-9lqvx"] Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.597270 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-9lqvx" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.604717 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-jsr7x" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.610891 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfddn\" (UniqueName: \"kubernetes.io/projected/815dba39-30ed-4471-bf04-ecc573373016-kube-api-access-gfddn\") pod \"cinder-operator-controller-manager-69cf5d4557-fnrjr\" (UID: \"815dba39-30ed-4471-bf04-ecc573373016\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-fnrjr" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.610987 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs5bj\" (UniqueName: \"kubernetes.io/projected/c020c33f-f12c-47ce-9639-c0069dff8bc4-kube-api-access-xs5bj\") pod \"designate-operator-controller-manager-b45d7bf98-sx9p8\" (UID: \"c020c33f-f12c-47ce-9639-c0069dff8bc4\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-sx9p8" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.611017 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffcv6\" (UniqueName: \"kubernetes.io/projected/f7ec268a-c82e-455e-b4b9-d0f96998c015-kube-api-access-ffcv6\") pod \"barbican-operator-controller-manager-59dd8b7cbf-mcfls\" (UID: \"f7ec268a-c82e-455e-b4b9-d0f96998c015\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-mcfls" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.620280 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-b9v4x"] Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.622233 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-b9v4x" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.622424 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-9lqvx"] Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.626059 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-ss6k9" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.633515 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-b9v4x"] Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.645248 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wkmzq"] Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.646127 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wkmzq" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.646318 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffcv6\" (UniqueName: \"kubernetes.io/projected/f7ec268a-c82e-455e-b4b9-d0f96998c015-kube-api-access-ffcv6\") pod \"barbican-operator-controller-manager-59dd8b7cbf-mcfls\" (UID: \"f7ec268a-c82e-455e-b4b9-d0f96998c015\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-mcfls" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.650022 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-985cl" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.668134 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wkmzq"] Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.688442 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-25z65"] Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.689523 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-25z65" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.691195 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-pqhff" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.691533 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.699111 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-25z65"] Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.709320 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-dcjs4"] Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.711719 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfddn\" (UniqueName: \"kubernetes.io/projected/815dba39-30ed-4471-bf04-ecc573373016-kube-api-access-gfddn\") pod \"cinder-operator-controller-manager-69cf5d4557-fnrjr\" (UID: \"815dba39-30ed-4471-bf04-ecc573373016\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-fnrjr" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.711755 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54fjk\" (UniqueName: \"kubernetes.io/projected/c9a77485-9340-433e-8bf6-cd47551438a9-kube-api-access-54fjk\") pod \"horizon-operator-controller-manager-77d5c5b54f-wkmzq\" (UID: \"c9a77485-9340-433e-8bf6-cd47551438a9\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wkmzq" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.711801 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgzmr\" (UniqueName: \"kubernetes.io/projected/2047bcfa-42e4-4e81-b2c9-47f4a876ea84-kube-api-access-kgzmr\") pod \"glance-operator-controller-manager-78fdd796fd-9lqvx\" (UID: \"2047bcfa-42e4-4e81-b2c9-47f4a876ea84\") " 
pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-9lqvx" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.711831 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs5bj\" (UniqueName: \"kubernetes.io/projected/c020c33f-f12c-47ce-9639-c0069dff8bc4-kube-api-access-xs5bj\") pod \"designate-operator-controller-manager-b45d7bf98-sx9p8\" (UID: \"c020c33f-f12c-47ce-9639-c0069dff8bc4\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-sx9p8" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.711869 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hvt8\" (UniqueName: \"kubernetes.io/projected/fcd15b84-585b-4984-9c1f-26a6c585ada4-kube-api-access-9hvt8\") pod \"heat-operator-controller-manager-594c8c9d5d-b9v4x\" (UID: \"fcd15b84-585b-4984-9c1f-26a6c585ada4\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-b9v4x" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.719450 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-dcjs4" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.722810 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-szsmf" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.739400 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-dcjs4"] Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.746140 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-vm28p"] Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.752703 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-vm28p" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.746625 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs5bj\" (UniqueName: \"kubernetes.io/projected/c020c33f-f12c-47ce-9639-c0069dff8bc4-kube-api-access-xs5bj\") pod \"designate-operator-controller-manager-b45d7bf98-sx9p8\" (UID: \"c020c33f-f12c-47ce-9639-c0069dff8bc4\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-sx9p8" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.755621 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-rm69r" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.759002 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfddn\" (UniqueName: \"kubernetes.io/projected/815dba39-30ed-4471-bf04-ecc573373016-kube-api-access-gfddn\") pod \"cinder-operator-controller-manager-69cf5d4557-fnrjr\" (UID: \"815dba39-30ed-4471-bf04-ecc573373016\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-fnrjr" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.761665 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-vm28p"] Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.781773 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-67mcr"] Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.782546 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-67mcr" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.785600 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-24fq5" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.790657 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-mcfls" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.797532 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-4ldkj"] Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.798436 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-4ldkj" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.799992 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-f85kl" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.805688 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-67mcr"] Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.810099 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-dvlzw"] Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.810902 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-dvlzw" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.817201 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-m4nfd" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.817611 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plxdt\" (UniqueName: \"kubernetes.io/projected/4f507c71-c9ab-4398-b25a-b6070d41f2b7-kube-api-access-plxdt\") pod \"infra-operator-controller-manager-54ccf4f85d-25z65\" (UID: \"4f507c71-c9ab-4398-b25a-b6070d41f2b7\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-25z65" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.817680 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hvt8\" (UniqueName: \"kubernetes.io/projected/fcd15b84-585b-4984-9c1f-26a6c585ada4-kube-api-access-9hvt8\") pod \"heat-operator-controller-manager-594c8c9d5d-b9v4x\" (UID: \"fcd15b84-585b-4984-9c1f-26a6c585ada4\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-b9v4x" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.817729 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4f507c71-c9ab-4398-b25a-b6070d41f2b7-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-25z65\" (UID: \"4f507c71-c9ab-4398-b25a-b6070d41f2b7\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-25z65" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.817756 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54fjk\" (UniqueName: \"kubernetes.io/projected/c9a77485-9340-433e-8bf6-cd47551438a9-kube-api-access-54fjk\") pod \"horizon-operator-controller-manager-77d5c5b54f-wkmzq\" (UID: \"c9a77485-9340-433e-8bf6-cd47551438a9\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wkmzq" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.817782 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvtl8\" (UniqueName: \"kubernetes.io/projected/186e1123-d674-468b-91c1-92eb6bca4a30-kube-api-access-lvtl8\") pod \"ironic-operator-controller-manager-69d6c9f5b8-dcjs4\" (UID: \"186e1123-d674-468b-91c1-92eb6bca4a30\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-dcjs4" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.817819 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgzmr\" (UniqueName: \"kubernetes.io/projected/2047bcfa-42e4-4e81-b2c9-47f4a876ea84-kube-api-access-kgzmr\") pod \"glance-operator-controller-manager-78fdd796fd-9lqvx\" (UID: \"2047bcfa-42e4-4e81-b2c9-47f4a876ea84\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-9lqvx" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.817849 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjbwd\" (UniqueName: \"kubernetes.io/projected/361a2cfd-62a4-40cc-b85c-7e81e6adb91d-kube-api-access-zjbwd\") pod \"keystone-operator-controller-manager-b8b6d4659-vm28p\" (UID: \"361a2cfd-62a4-40cc-b85c-7e81e6adb91d\") " 
pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-vm28p" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.821410 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-4ldkj"] Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.829863 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-dvlzw"] Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.836083 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgzmr\" (UniqueName: \"kubernetes.io/projected/2047bcfa-42e4-4e81-b2c9-47f4a876ea84-kube-api-access-kgzmr\") pod \"glance-operator-controller-manager-78fdd796fd-9lqvx\" (UID: \"2047bcfa-42e4-4e81-b2c9-47f4a876ea84\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-9lqvx" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.836182 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54fjk\" (UniqueName: \"kubernetes.io/projected/c9a77485-9340-433e-8bf6-cd47551438a9-kube-api-access-54fjk\") pod \"horizon-operator-controller-manager-77d5c5b54f-wkmzq\" (UID: \"c9a77485-9340-433e-8bf6-cd47551438a9\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wkmzq" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.836772 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hvt8\" (UniqueName: \"kubernetes.io/projected/fcd15b84-585b-4984-9c1f-26a6c585ada4-kube-api-access-9hvt8\") pod \"heat-operator-controller-manager-594c8c9d5d-b9v4x\" (UID: \"fcd15b84-585b-4984-9c1f-26a6c585ada4\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-b9v4x" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.840064 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-pkbln"] Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.840825 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-pkbln" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.842556 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-2k684" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.851794 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-sjml2"] Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.852547 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-sjml2" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.856147 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-cnphg" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.866407 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-pkbln"] Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.870002 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-fnrjr" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.873629 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-sjml2"] Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.876520 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-sx9p8" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.883076 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-9htzp"] Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.883852 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9htzp" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.887456 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr"] Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.888324 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-6kzcw" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.891521 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.893381 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.893643 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-l9xzs" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.903332 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-9htzp"] Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.904371 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-hf9ft"] Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.905063 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-hf9ft" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.906895 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-j8zqg" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.927141 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5g67\" (UniqueName: \"kubernetes.io/projected/fd035f9e-2587-4286-85d9-db7c209970de-kube-api-access-n5g67\") pod \"mariadb-operator-controller-manager-c87fff755-4ldkj\" (UID: \"fd035f9e-2587-4286-85d9-db7c209970de\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-4ldkj" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.929359 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfwmg\" (UniqueName: \"kubernetes.io/projected/928d4875-5da0-47ce-a68d-99fed2b7edce-kube-api-access-pfwmg\") pod \"neutron-operator-controller-manager-5d8f59fb49-dvlzw\" (UID: \"928d4875-5da0-47ce-a68d-99fed2b7edce\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-dvlzw" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.929480 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4f507c71-c9ab-4398-b25a-b6070d41f2b7-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-25z65\" (UID: \"4f507c71-c9ab-4398-b25a-b6070d41f2b7\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-25z65" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.929553 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvtl8\" (UniqueName: \"kubernetes.io/projected/186e1123-d674-468b-91c1-92eb6bca4a30-kube-api-access-lvtl8\") pod \"ironic-operator-controller-manager-69d6c9f5b8-dcjs4\" (UID: \"186e1123-d674-468b-91c1-92eb6bca4a30\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-dcjs4" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.929698 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdmnt\" (UniqueName: \"kubernetes.io/projected/f942aff3-65c5-4507-af71-0e4596abc4cf-kube-api-access-qdmnt\") pod \"manila-operator-controller-manager-78c6999f6f-67mcr\" (UID: \"f942aff3-65c5-4507-af71-0e4596abc4cf\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-67mcr" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.929780 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjbwd\" (UniqueName: \"kubernetes.io/projected/361a2cfd-62a4-40cc-b85c-7e81e6adb91d-kube-api-access-zjbwd\") pod \"keystone-operator-controller-manager-b8b6d4659-vm28p\" (UID: \"361a2cfd-62a4-40cc-b85c-7e81e6adb91d\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-vm28p" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.929841 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc5zb\" (UniqueName: \"kubernetes.io/projected/8a19ffda-db08-44ec-bc17-d70c74f9552e-kube-api-access-sc5zb\") pod \"nova-operator-controller-manager-6b8bc8d87d-pkbln\" (UID: \"8a19ffda-db08-44ec-bc17-d70c74f9552e\") " 
pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-pkbln" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.929882 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2pqv\" (UniqueName: \"kubernetes.io/projected/43ab3264-2c0d-44a8-ab85-66efc360bf67-kube-api-access-v2pqv\") pod \"octavia-operator-controller-manager-7bd9774b6-sjml2\" (UID: \"43ab3264-2c0d-44a8-ab85-66efc360bf67\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-sjml2" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.929929 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plxdt\" (UniqueName: \"kubernetes.io/projected/4f507c71-c9ab-4398-b25a-b6070d41f2b7-kube-api-access-plxdt\") pod \"infra-operator-controller-manager-54ccf4f85d-25z65\" (UID: \"4f507c71-c9ab-4398-b25a-b6070d41f2b7\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-25z65" Jan 22 09:24:48 crc kubenswrapper[4892]: E0122 09:24:48.930409 4892 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 09:24:48 crc kubenswrapper[4892]: E0122 09:24:48.930473 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f507c71-c9ab-4398-b25a-b6070d41f2b7-cert podName:4f507c71-c9ab-4398-b25a-b6070d41f2b7 nodeName:}" failed. No retries permitted until 2026-01-22 09:24:49.430456811 +0000 UTC m=+859.274535874 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4f507c71-c9ab-4398-b25a-b6070d41f2b7-cert") pod "infra-operator-controller-manager-54ccf4f85d-25z65" (UID: "4f507c71-c9ab-4398-b25a-b6070d41f2b7") : secret "infra-operator-webhook-server-cert" not found Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.934253 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-9lqvx" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.950574 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr"] Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.952924 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvtl8\" (UniqueName: \"kubernetes.io/projected/186e1123-d674-468b-91c1-92eb6bca4a30-kube-api-access-lvtl8\") pod \"ironic-operator-controller-manager-69d6c9f5b8-dcjs4\" (UID: \"186e1123-d674-468b-91c1-92eb6bca4a30\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-dcjs4" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.957936 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjbwd\" (UniqueName: \"kubernetes.io/projected/361a2cfd-62a4-40cc-b85c-7e81e6adb91d-kube-api-access-zjbwd\") pod \"keystone-operator-controller-manager-b8b6d4659-vm28p\" (UID: \"361a2cfd-62a4-40cc-b85c-7e81e6adb91d\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-vm28p" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.969599 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plxdt\" (UniqueName: \"kubernetes.io/projected/4f507c71-c9ab-4398-b25a-b6070d41f2b7-kube-api-access-plxdt\") pod \"infra-operator-controller-manager-54ccf4f85d-25z65\" (UID: \"4f507c71-c9ab-4398-b25a-b6070d41f2b7\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-25z65" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.971031 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-b9v4x" Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.979688 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-hf9ft"] Jan 22 09:24:48 crc kubenswrapper[4892]: I0122 09:24:48.991172 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wkmzq" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.000112 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-gfcjl"] Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.001021 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gfcjl" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.004777 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-9x5hm" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.007556 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-gfcjl"] Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.031066 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr5hj\" (UniqueName: \"kubernetes.io/projected/e23d3dd6-bce9-496f-840b-0bbd3017826f-kube-api-access-wr5hj\") pod \"placement-operator-controller-manager-5d646b7d76-hf9ft\" (UID: \"e23d3dd6-bce9-496f-840b-0bbd3017826f\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-hf9ft" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.031114 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdmnt\" (UniqueName: \"kubernetes.io/projected/f942aff3-65c5-4507-af71-0e4596abc4cf-kube-api-access-qdmnt\") pod \"manila-operator-controller-manager-78c6999f6f-67mcr\" (UID: \"f942aff3-65c5-4507-af71-0e4596abc4cf\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-67mcr" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.031147 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dknmn\" (UniqueName: \"kubernetes.io/projected/4ce3456e-dba6-498d-bf5a-aef2832489fe-kube-api-access-dknmn\") pod \"ovn-operator-controller-manager-55db956ddc-9htzp\" (UID: \"4ce3456e-dba6-498d-bf5a-aef2832489fe\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9htzp" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.031180 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc5zb\" (UniqueName: \"kubernetes.io/projected/8a19ffda-db08-44ec-bc17-d70c74f9552e-kube-api-access-sc5zb\") pod \"nova-operator-controller-manager-6b8bc8d87d-pkbln\" (UID: \"8a19ffda-db08-44ec-bc17-d70c74f9552e\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-pkbln" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.031201 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2pqv\" (UniqueName: \"kubernetes.io/projected/43ab3264-2c0d-44a8-ab85-66efc360bf67-kube-api-access-v2pqv\") pod \"octavia-operator-controller-manager-7bd9774b6-sjml2\" (UID: \"43ab3264-2c0d-44a8-ab85-66efc360bf67\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-sjml2" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.031227 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea-cert\") pod \"openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr\" (UID: \"c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.031248 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5g67\" (UniqueName: 
\"kubernetes.io/projected/fd035f9e-2587-4286-85d9-db7c209970de-kube-api-access-n5g67\") pod \"mariadb-operator-controller-manager-c87fff755-4ldkj\" (UID: \"fd035f9e-2587-4286-85d9-db7c209970de\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-4ldkj" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.031266 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ftjz\" (UniqueName: \"kubernetes.io/projected/c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea-kube-api-access-4ftjz\") pod \"openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr\" (UID: \"c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.031312 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfwmg\" (UniqueName: \"kubernetes.io/projected/928d4875-5da0-47ce-a68d-99fed2b7edce-kube-api-access-pfwmg\") pod \"neutron-operator-controller-manager-5d8f59fb49-dvlzw\" (UID: \"928d4875-5da0-47ce-a68d-99fed2b7edce\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-dvlzw" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.043424 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-dcjs4" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.051421 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfwmg\" (UniqueName: \"kubernetes.io/projected/928d4875-5da0-47ce-a68d-99fed2b7edce-kube-api-access-pfwmg\") pod \"neutron-operator-controller-manager-5d8f59fb49-dvlzw\" (UID: \"928d4875-5da0-47ce-a68d-99fed2b7edce\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-dvlzw" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.051465 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdmnt\" (UniqueName: \"kubernetes.io/projected/f942aff3-65c5-4507-af71-0e4596abc4cf-kube-api-access-qdmnt\") pod \"manila-operator-controller-manager-78c6999f6f-67mcr\" (UID: \"f942aff3-65c5-4507-af71-0e4596abc4cf\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-67mcr" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.053615 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2pqv\" (UniqueName: \"kubernetes.io/projected/43ab3264-2c0d-44a8-ab85-66efc360bf67-kube-api-access-v2pqv\") pod \"octavia-operator-controller-manager-7bd9774b6-sjml2\" (UID: \"43ab3264-2c0d-44a8-ab85-66efc360bf67\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-sjml2" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.058112 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5g67\" (UniqueName: \"kubernetes.io/projected/fd035f9e-2587-4286-85d9-db7c209970de-kube-api-access-n5g67\") pod \"mariadb-operator-controller-manager-c87fff755-4ldkj\" (UID: \"fd035f9e-2587-4286-85d9-db7c209970de\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-4ldkj" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.062919 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc5zb\" (UniqueName: \"kubernetes.io/projected/8a19ffda-db08-44ec-bc17-d70c74f9552e-kube-api-access-sc5zb\") pod 
\"nova-operator-controller-manager-6b8bc8d87d-pkbln\" (UID: \"8a19ffda-db08-44ec-bc17-d70c74f9552e\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-pkbln" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.078632 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-2n9gl"] Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.079850 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-2n9gl" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.084180 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-8k7j6" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.094044 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-vm28p" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.098830 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-2n9gl"] Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.101617 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-67mcr" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.132503 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea-cert\") pod \"openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr\" (UID: \"c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.132560 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ftjz\" (UniqueName: \"kubernetes.io/projected/c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea-kube-api-access-4ftjz\") pod \"openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr\" (UID: \"c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.132627 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr5hj\" (UniqueName: \"kubernetes.io/projected/e23d3dd6-bce9-496f-840b-0bbd3017826f-kube-api-access-wr5hj\") pod \"placement-operator-controller-manager-5d646b7d76-hf9ft\" (UID: \"e23d3dd6-bce9-496f-840b-0bbd3017826f\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-hf9ft" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.132664 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w55wv\" (UniqueName: \"kubernetes.io/projected/f7dcb7b0-0580-4aff-8770-377761a44f88-kube-api-access-w55wv\") pod \"swift-operator-controller-manager-547cbdb99f-gfcjl\" (UID: \"f7dcb7b0-0580-4aff-8770-377761a44f88\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gfcjl" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.132692 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dknmn\" (UniqueName: 
\"kubernetes.io/projected/4ce3456e-dba6-498d-bf5a-aef2832489fe-kube-api-access-dknmn\") pod \"ovn-operator-controller-manager-55db956ddc-9htzp\" (UID: \"4ce3456e-dba6-498d-bf5a-aef2832489fe\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9htzp" Jan 22 09:24:49 crc kubenswrapper[4892]: E0122 09:24:49.133585 4892 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 09:24:49 crc kubenswrapper[4892]: E0122 09:24:49.133635 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea-cert podName:c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea nodeName:}" failed. No retries permitted until 2026-01-22 09:24:49.633620893 +0000 UTC m=+859.477699956 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea-cert") pod "openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr" (UID: "c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.154924 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ftjz\" (UniqueName: \"kubernetes.io/projected/c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea-kube-api-access-4ftjz\") pod \"openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr\" (UID: \"c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.158435 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr5hj\" (UniqueName: \"kubernetes.io/projected/e23d3dd6-bce9-496f-840b-0bbd3017826f-kube-api-access-wr5hj\") pod \"placement-operator-controller-manager-5d646b7d76-hf9ft\" (UID: \"e23d3dd6-bce9-496f-840b-0bbd3017826f\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-hf9ft" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.158569 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dknmn\" (UniqueName: \"kubernetes.io/projected/4ce3456e-dba6-498d-bf5a-aef2832489fe-kube-api-access-dknmn\") pod \"ovn-operator-controller-manager-55db956ddc-9htzp\" (UID: \"4ce3456e-dba6-498d-bf5a-aef2832489fe\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9htzp" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.173670 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-hj2tb"] Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.174775 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-hj2tb" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.177729 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-mj48g" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.178588 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-4ldkj" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.184629 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-dvlzw" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.202088 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-hj2tb"] Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.210529 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-pkbln" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.224676 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5ffb9c6597-xq8jw"] Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.225702 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-xq8jw" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.231409 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-q5tkj" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.235476 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w55wv\" (UniqueName: \"kubernetes.io/projected/f7dcb7b0-0580-4aff-8770-377761a44f88-kube-api-access-w55wv\") pod \"swift-operator-controller-manager-547cbdb99f-gfcjl\" (UID: \"f7dcb7b0-0580-4aff-8770-377761a44f88\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gfcjl" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.235842 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkvq8\" (UniqueName: \"kubernetes.io/projected/062ff35c-ceb7-44b0-a2ef-1d79a14a444c-kube-api-access-xkvq8\") pod \"telemetry-operator-controller-manager-85cd9769bb-2n9gl\" (UID: \"062ff35c-ceb7-44b0-a2ef-1d79a14a444c\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-2n9gl" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.235959 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-284cf\" (UniqueName: \"kubernetes.io/projected/be68c0da-a0d9-463c-be32-6191b85ae620-kube-api-access-284cf\") pod \"test-operator-controller-manager-69797bbcbd-hj2tb\" (UID: \"be68c0da-a0d9-463c-be32-6191b85ae620\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-hj2tb" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.238729 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5ffb9c6597-xq8jw"] Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.240602 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9htzp" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.247896 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-sjml2" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.268732 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-788c8b99b5-cws6m"] Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.270017 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-788c8b99b5-cws6m" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.274793 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.276064 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w55wv\" (UniqueName: \"kubernetes.io/projected/f7dcb7b0-0580-4aff-8770-377761a44f88-kube-api-access-w55wv\") pod \"swift-operator-controller-manager-547cbdb99f-gfcjl\" (UID: \"f7dcb7b0-0580-4aff-8770-377761a44f88\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gfcjl" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.278938 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.278966 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-gxftp" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.285166 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-788c8b99b5-cws6m"] Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.285816 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-hf9ft" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.296009 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hkmzg"] Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.298501 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hkmzg" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.301478 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-mcfls"] Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.303234 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-wmr7n" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.307832 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hkmzg"] Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.326217 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gfcjl" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.343049 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-284cf\" (UniqueName: \"kubernetes.io/projected/be68c0da-a0d9-463c-be32-6191b85ae620-kube-api-access-284cf\") pod \"test-operator-controller-manager-69797bbcbd-hj2tb\" (UID: \"be68c0da-a0d9-463c-be32-6191b85ae620\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-hj2tb" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.343144 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn6tp\" (UniqueName: \"kubernetes.io/projected/b6638ff5-13e6-44b1-8711-0c775882282f-kube-api-access-nn6tp\") pod \"watcher-operator-controller-manager-5ffb9c6597-xq8jw\" (UID: \"b6638ff5-13e6-44b1-8711-0c775882282f\") " pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-xq8jw" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.343192 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-metrics-certs\") pod \"openstack-operator-controller-manager-788c8b99b5-cws6m\" (UID: \"7b2bb8eb-1122-4141-a4ed-c3d316c8b821\") " pod="openstack-operators/openstack-operator-controller-manager-788c8b99b5-cws6m" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.343212 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-webhook-certs\") pod \"openstack-operator-controller-manager-788c8b99b5-cws6m\" (UID: \"7b2bb8eb-1122-4141-a4ed-c3d316c8b821\") " pod="openstack-operators/openstack-operator-controller-manager-788c8b99b5-cws6m" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.343255 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2h6x\" (UniqueName: \"kubernetes.io/projected/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-kube-api-access-j2h6x\") pod \"openstack-operator-controller-manager-788c8b99b5-cws6m\" (UID: \"7b2bb8eb-1122-4141-a4ed-c3d316c8b821\") " pod="openstack-operators/openstack-operator-controller-manager-788c8b99b5-cws6m" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.343278 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkvq8\" (UniqueName: \"kubernetes.io/projected/062ff35c-ceb7-44b0-a2ef-1d79a14a444c-kube-api-access-xkvq8\") pod \"telemetry-operator-controller-manager-85cd9769bb-2n9gl\" (UID: \"062ff35c-ceb7-44b0-a2ef-1d79a14a444c\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-2n9gl" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.367062 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-284cf\" (UniqueName: \"kubernetes.io/projected/be68c0da-a0d9-463c-be32-6191b85ae620-kube-api-access-284cf\") pod \"test-operator-controller-manager-69797bbcbd-hj2tb\" (UID: \"be68c0da-a0d9-463c-be32-6191b85ae620\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-hj2tb" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.391084 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkvq8\" (UniqueName: 
\"kubernetes.io/projected/062ff35c-ceb7-44b0-a2ef-1d79a14a444c-kube-api-access-xkvq8\") pod \"telemetry-operator-controller-manager-85cd9769bb-2n9gl\" (UID: \"062ff35c-ceb7-44b0-a2ef-1d79a14a444c\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-2n9gl" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.396446 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-2n9gl" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.438527 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-sx9p8"] Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.444199 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plbwr\" (UniqueName: \"kubernetes.io/projected/7be69e64-d272-47f2-933a-4925c0aad02c-kube-api-access-plbwr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-hkmzg\" (UID: \"7be69e64-d272-47f2-933a-4925c0aad02c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hkmzg" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.444249 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2h6x\" (UniqueName: \"kubernetes.io/projected/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-kube-api-access-j2h6x\") pod \"openstack-operator-controller-manager-788c8b99b5-cws6m\" (UID: \"7b2bb8eb-1122-4141-a4ed-c3d316c8b821\") " pod="openstack-operators/openstack-operator-controller-manager-788c8b99b5-cws6m" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.444501 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn6tp\" (UniqueName: \"kubernetes.io/projected/b6638ff5-13e6-44b1-8711-0c775882282f-kube-api-access-nn6tp\") pod \"watcher-operator-controller-manager-5ffb9c6597-xq8jw\" (UID: \"b6638ff5-13e6-44b1-8711-0c775882282f\") " pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-xq8jw" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.444528 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4f507c71-c9ab-4398-b25a-b6070d41f2b7-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-25z65\" (UID: \"4f507c71-c9ab-4398-b25a-b6070d41f2b7\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-25z65" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.444553 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-metrics-certs\") pod \"openstack-operator-controller-manager-788c8b99b5-cws6m\" (UID: \"7b2bb8eb-1122-4141-a4ed-c3d316c8b821\") " pod="openstack-operators/openstack-operator-controller-manager-788c8b99b5-cws6m" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.444570 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-webhook-certs\") pod \"openstack-operator-controller-manager-788c8b99b5-cws6m\" (UID: \"7b2bb8eb-1122-4141-a4ed-c3d316c8b821\") " pod="openstack-operators/openstack-operator-controller-manager-788c8b99b5-cws6m" Jan 22 09:24:49 crc kubenswrapper[4892]: E0122 09:24:49.444675 4892 secret.go:188] Couldn't get secret 
openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 09:24:49 crc kubenswrapper[4892]: E0122 09:24:49.444717 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-webhook-certs podName:7b2bb8eb-1122-4141-a4ed-c3d316c8b821 nodeName:}" failed. No retries permitted until 2026-01-22 09:24:49.944703477 +0000 UTC m=+859.788782540 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-webhook-certs") pod "openstack-operator-controller-manager-788c8b99b5-cws6m" (UID: "7b2bb8eb-1122-4141-a4ed-c3d316c8b821") : secret "webhook-server-cert" not found Jan 22 09:24:49 crc kubenswrapper[4892]: E0122 09:24:49.445199 4892 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 09:24:49 crc kubenswrapper[4892]: E0122 09:24:49.445224 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f507c71-c9ab-4398-b25a-b6070d41f2b7-cert podName:4f507c71-c9ab-4398-b25a-b6070d41f2b7 nodeName:}" failed. No retries permitted until 2026-01-22 09:24:50.44521448 +0000 UTC m=+860.289293543 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4f507c71-c9ab-4398-b25a-b6070d41f2b7-cert") pod "infra-operator-controller-manager-54ccf4f85d-25z65" (UID: "4f507c71-c9ab-4398-b25a-b6070d41f2b7") : secret "infra-operator-webhook-server-cert" not found Jan 22 09:24:49 crc kubenswrapper[4892]: E0122 09:24:49.445259 4892 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 09:24:49 crc kubenswrapper[4892]: E0122 09:24:49.445277 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-metrics-certs podName:7b2bb8eb-1122-4141-a4ed-c3d316c8b821 nodeName:}" failed. No retries permitted until 2026-01-22 09:24:49.945271881 +0000 UTC m=+859.789350944 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-metrics-certs") pod "openstack-operator-controller-manager-788c8b99b5-cws6m" (UID: "7b2bb8eb-1122-4141-a4ed-c3d316c8b821") : secret "metrics-server-cert" not found Jan 22 09:24:49 crc kubenswrapper[4892]: W0122 09:24:49.455109 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc020c33f_f12c_47ce_9639_c0069dff8bc4.slice/crio-0c6f9f88ee92498d155ac308375cc723963628fc17eaba430fc5c1ba9b7dfdd1 WatchSource:0}: Error finding container 0c6f9f88ee92498d155ac308375cc723963628fc17eaba430fc5c1ba9b7dfdd1: Status 404 returned error can't find the container with id 0c6f9f88ee92498d155ac308375cc723963628fc17eaba430fc5c1ba9b7dfdd1 Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.464122 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2h6x\" (UniqueName: \"kubernetes.io/projected/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-kube-api-access-j2h6x\") pod \"openstack-operator-controller-manager-788c8b99b5-cws6m\" (UID: \"7b2bb8eb-1122-4141-a4ed-c3d316c8b821\") " pod="openstack-operators/openstack-operator-controller-manager-788c8b99b5-cws6m" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.467612 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn6tp\" (UniqueName: \"kubernetes.io/projected/b6638ff5-13e6-44b1-8711-0c775882282f-kube-api-access-nn6tp\") pod \"watcher-operator-controller-manager-5ffb9c6597-xq8jw\" (UID: \"b6638ff5-13e6-44b1-8711-0c775882282f\") " pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-xq8jw" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.525704 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-hj2tb" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.546159 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plbwr\" (UniqueName: \"kubernetes.io/projected/7be69e64-d272-47f2-933a-4925c0aad02c-kube-api-access-plbwr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-hkmzg\" (UID: \"7be69e64-d272-47f2-933a-4925c0aad02c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hkmzg" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.567808 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plbwr\" (UniqueName: \"kubernetes.io/projected/7be69e64-d272-47f2-933a-4925c0aad02c-kube-api-access-plbwr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-hkmzg\" (UID: \"7be69e64-d272-47f2-933a-4925c0aad02c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hkmzg" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.587240 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-xq8jw" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.613712 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-mcfls" event={"ID":"f7ec268a-c82e-455e-b4b9-d0f96998c015","Type":"ContainerStarted","Data":"aa4c9c5ea74e9df4d3ce125b3416cb0ce72d8d4ad56f1ff8ebec6fcdb2f78257"} Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.614948 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-sx9p8" event={"ID":"c020c33f-f12c-47ce-9639-c0069dff8bc4","Type":"ContainerStarted","Data":"0c6f9f88ee92498d155ac308375cc723963628fc17eaba430fc5c1ba9b7dfdd1"} Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.626604 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-9lqvx"] Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.647792 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea-cert\") pod \"openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr\" (UID: \"c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr" Jan 22 09:24:49 crc kubenswrapper[4892]: E0122 09:24:49.647952 4892 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 09:24:49 crc kubenswrapper[4892]: E0122 09:24:49.647992 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea-cert podName:c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea nodeName:}" failed. No retries permitted until 2026-01-22 09:24:50.647980332 +0000 UTC m=+860.492059395 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea-cert") pod "openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr" (UID: "c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.653164 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-fnrjr"] Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.670098 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hkmzg" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.750487 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-b9v4x"] Jan 22 09:24:49 crc kubenswrapper[4892]: W0122 09:24:49.753077 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcd15b84_585b_4984_9c1f_26a6c585ada4.slice/crio-a81960eca9529a92c5fb8baf4472dc6e8cf2d25dbc7d1b9c5983abc36b6c3db8 WatchSource:0}: Error finding container a81960eca9529a92c5fb8baf4472dc6e8cf2d25dbc7d1b9c5983abc36b6c3db8: Status 404 returned error can't find the container with id a81960eca9529a92c5fb8baf4472dc6e8cf2d25dbc7d1b9c5983abc36b6c3db8 Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.954667 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-metrics-certs\") pod \"openstack-operator-controller-manager-788c8b99b5-cws6m\" (UID: \"7b2bb8eb-1122-4141-a4ed-c3d316c8b821\") " pod="openstack-operators/openstack-operator-controller-manager-788c8b99b5-cws6m" Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.955022 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-webhook-certs\") pod \"openstack-operator-controller-manager-788c8b99b5-cws6m\" (UID: \"7b2bb8eb-1122-4141-a4ed-c3d316c8b821\") " pod="openstack-operators/openstack-operator-controller-manager-788c8b99b5-cws6m" Jan 22 09:24:49 crc kubenswrapper[4892]: E0122 09:24:49.954843 4892 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 09:24:49 crc kubenswrapper[4892]: E0122 09:24:49.955188 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-metrics-certs podName:7b2bb8eb-1122-4141-a4ed-c3d316c8b821 nodeName:}" failed. No retries permitted until 2026-01-22 09:24:50.955170751 +0000 UTC m=+860.799249814 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-metrics-certs") pod "openstack-operator-controller-manager-788c8b99b5-cws6m" (UID: "7b2bb8eb-1122-4141-a4ed-c3d316c8b821") : secret "metrics-server-cert" not found Jan 22 09:24:49 crc kubenswrapper[4892]: E0122 09:24:49.955215 4892 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 09:24:49 crc kubenswrapper[4892]: E0122 09:24:49.955251 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-webhook-certs podName:7b2bb8eb-1122-4141-a4ed-c3d316c8b821 nodeName:}" failed. No retries permitted until 2026-01-22 09:24:50.955239483 +0000 UTC m=+860.799318546 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-webhook-certs") pod "openstack-operator-controller-manager-788c8b99b5-cws6m" (UID: "7b2bb8eb-1122-4141-a4ed-c3d316c8b821") : secret "webhook-server-cert" not found Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.972820 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-4ldkj"] Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.979046 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-dvlzw"] Jan 22 09:24:49 crc kubenswrapper[4892]: W0122 09:24:49.985484 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod928d4875_5da0_47ce_a68d_99fed2b7edce.slice/crio-ee9ec886cb88f3880dd637bbebdeb57f483961e603596bc1baa8cce4ac99d80e WatchSource:0}: Error finding container ee9ec886cb88f3880dd637bbebdeb57f483961e603596bc1baa8cce4ac99d80e: Status 404 returned error can't find the container with id ee9ec886cb88f3880dd637bbebdeb57f483961e603596bc1baa8cce4ac99d80e Jan 22 09:24:49 crc kubenswrapper[4892]: W0122 09:24:49.987623 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd035f9e_2587_4286_85d9_db7c209970de.slice/crio-82b2d7d9ab54bdc19a0c419743da1699bb807a815e576821b12ebd037f637f47 WatchSource:0}: Error finding container 82b2d7d9ab54bdc19a0c419743da1699bb807a815e576821b12ebd037f637f47: Status 404 returned error can't find the container with id 82b2d7d9ab54bdc19a0c419743da1699bb807a815e576821b12ebd037f637f47 Jan 22 09:24:49 crc kubenswrapper[4892]: I0122 09:24:49.997723 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wkmzq"] Jan 22 09:24:50 crc kubenswrapper[4892]: W0122 09:24:50.001492 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9a77485_9340_433e_8bf6_cd47551438a9.slice/crio-5f1d378655d0e3a4925e657a620ae980be5b6693bca4591805ef01fc0344e353 WatchSource:0}: Error finding container 5f1d378655d0e3a4925e657a620ae980be5b6693bca4591805ef01fc0344e353: Status 404 returned error can't find the container with id 5f1d378655d0e3a4925e657a620ae980be5b6693bca4591805ef01fc0344e353 Jan 22 09:24:50 crc kubenswrapper[4892]: I0122 09:24:50.175654 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-hf9ft"] Jan 22 09:24:50 crc kubenswrapper[4892]: I0122 09:24:50.182793 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-hj2tb"] Jan 22 09:24:50 crc kubenswrapper[4892]: I0122 09:24:50.192355 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-2n9gl"] Jan 22 09:24:50 crc kubenswrapper[4892]: I0122 09:24:50.199964 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-vm28p"] Jan 22 09:24:50 crc kubenswrapper[4892]: I0122 09:24:50.207268 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-sjml2"] Jan 22 09:24:50 crc kubenswrapper[4892]: I0122 
09:24:50.214871 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-9htzp"] Jan 22 09:24:50 crc kubenswrapper[4892]: I0122 09:24:50.222813 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-gfcjl"] Jan 22 09:24:50 crc kubenswrapper[4892]: W0122 09:24:50.223818 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a19ffda_db08_44ec_bc17_d70c74f9552e.slice/crio-ef68cc98b3986fe02f19dd837f7b4b28ff3aa883604586e0ef66d26c43097f37 WatchSource:0}: Error finding container ef68cc98b3986fe02f19dd837f7b4b28ff3aa883604586e0ef66d26c43097f37: Status 404 returned error can't find the container with id ef68cc98b3986fe02f19dd837f7b4b28ff3aa883604586e0ef66d26c43097f37 Jan 22 09:24:50 crc kubenswrapper[4892]: W0122 09:24:50.229723 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod186e1123_d674_468b_91c1_92eb6bca4a30.slice/crio-ed4c695447c7544ae385a6494699fe7eb2bef57f7074990f6a2d21aa3e7fc9e8 WatchSource:0}: Error finding container ed4c695447c7544ae385a6494699fe7eb2bef57f7074990f6a2d21aa3e7fc9e8: Status 404 returned error can't find the container with id ed4c695447c7544ae385a6494699fe7eb2bef57f7074990f6a2d21aa3e7fc9e8 Jan 22 09:24:50 crc kubenswrapper[4892]: W0122 09:24:50.230063 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7dcb7b0_0580_4aff_8770_377761a44f88.slice/crio-256d7b089bd62370fb4f69bb26d994af9b9b433e4577de73541db50d82b09be4 WatchSource:0}: Error finding container 256d7b089bd62370fb4f69bb26d994af9b9b433e4577de73541db50d82b09be4: Status 404 returned error can't find the container with id 256d7b089bd62370fb4f69bb26d994af9b9b433e4577de73541db50d82b09be4 Jan 22 09:24:50 crc kubenswrapper[4892]: I0122 09:24:50.230700 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-pkbln"] Jan 22 09:24:50 crc kubenswrapper[4892]: W0122 09:24:50.232744 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6638ff5_13e6_44b1_8711_0c775882282f.slice/crio-a94e91d3420b1cb625db3503a0494ea29a127451ac60bdc79380172ee83e1477 WatchSource:0}: Error finding container a94e91d3420b1cb625db3503a0494ea29a127451ac60bdc79380172ee83e1477: Status 404 returned error can't find the container with id a94e91d3420b1cb625db3503a0494ea29a127451ac60bdc79380172ee83e1477 Jan 22 09:24:50 crc kubenswrapper[4892]: E0122 09:24:50.235432 4892 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 
-3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w55wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-gfcjl_openstack-operators(f7dcb7b0-0580-4aff-8770-377761a44f88): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 09:24:50 crc kubenswrapper[4892]: E0122 09:24:50.235540 4892 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:d3c55b59cb192799f8d31196c55c9e9bb3cd38aef7ec51ef257dabf1548e8b30,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lvtl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-69d6c9f5b8-dcjs4_openstack-operators(186e1123-d674-468b-91c1-92eb6bca4a30): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 09:24:50 crc kubenswrapper[4892]: I0122 09:24:50.236449 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-dcjs4"] Jan 22 09:24:50 crc kubenswrapper[4892]: E0122 09:24:50.237547 4892 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sc5zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
nova-operator-controller-manager-6b8bc8d87d-pkbln_openstack-operators(8a19ffda-db08-44ec-bc17-d70c74f9552e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 09:24:50 crc kubenswrapper[4892]: E0122 09:24:50.237656 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-dcjs4" podUID="186e1123-d674-468b-91c1-92eb6bca4a30" Jan 22 09:24:50 crc kubenswrapper[4892]: E0122 09:24:50.237693 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gfcjl" podUID="f7dcb7b0-0580-4aff-8770-377761a44f88" Jan 22 09:24:50 crc kubenswrapper[4892]: E0122 09:24:50.238158 4892 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:2d6d13b3c28e45c6bec980b8808dda8da4723ae87e66d04f53d52c3b3c51612b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nn6tp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5ffb9c6597-xq8jw_openstack-operators(b6638ff5-13e6-44b1-8711-0c775882282f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 09:24:50 crc kubenswrapper[4892]: W0122 09:24:50.238582 4892 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7be69e64_d272_47f2_933a_4925c0aad02c.slice/crio-cb61d2ffa5ee2eede85f01b86f347bca71cf047b2488f9c4c9725a37e82c56cd WatchSource:0}: Error finding container cb61d2ffa5ee2eede85f01b86f347bca71cf047b2488f9c4c9725a37e82c56cd: Status 404 returned error can't find the container with id cb61d2ffa5ee2eede85f01b86f347bca71cf047b2488f9c4c9725a37e82c56cd Jan 22 09:24:50 crc kubenswrapper[4892]: E0122 09:24:50.238691 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-pkbln" podUID="8a19ffda-db08-44ec-bc17-d70c74f9552e" Jan 22 09:24:50 crc kubenswrapper[4892]: E0122 09:24:50.238814 4892 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-284cf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-hj2tb_openstack-operators(be68c0da-a0d9-463c-be32-6191b85ae620): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 09:24:50 crc kubenswrapper[4892]: E0122 09:24:50.239402 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" 
pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-xq8jw" podUID="b6638ff5-13e6-44b1-8711-0c775882282f" Jan 22 09:24:50 crc kubenswrapper[4892]: E0122 09:24:50.240621 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-hj2tb" podUID="be68c0da-a0d9-463c-be32-6191b85ae620" Jan 22 09:24:50 crc kubenswrapper[4892]: I0122 09:24:50.242301 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-67mcr"] Jan 22 09:24:50 crc kubenswrapper[4892]: E0122 09:24:50.243028 4892 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-plbwr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-hkmzg_openstack-operators(7be69e64-d272-47f2-933a-4925c0aad02c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 09:24:50 crc kubenswrapper[4892]: E0122 09:24:50.244727 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hkmzg" podUID="7be69e64-d272-47f2-933a-4925c0aad02c" Jan 22 09:24:50 crc kubenswrapper[4892]: I0122 09:24:50.245890 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hkmzg"] Jan 22 09:24:50 crc kubenswrapper[4892]: I0122 09:24:50.249646 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/watcher-operator-controller-manager-5ffb9c6597-xq8jw"] Jan 22 09:24:50 crc kubenswrapper[4892]: I0122 09:24:50.461982 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4f507c71-c9ab-4398-b25a-b6070d41f2b7-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-25z65\" (UID: \"4f507c71-c9ab-4398-b25a-b6070d41f2b7\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-25z65" Jan 22 09:24:50 crc kubenswrapper[4892]: E0122 09:24:50.462158 4892 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 09:24:50 crc kubenswrapper[4892]: E0122 09:24:50.462233 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f507c71-c9ab-4398-b25a-b6070d41f2b7-cert podName:4f507c71-c9ab-4398-b25a-b6070d41f2b7 nodeName:}" failed. No retries permitted until 2026-01-22 09:24:52.462208801 +0000 UTC m=+862.306287864 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4f507c71-c9ab-4398-b25a-b6070d41f2b7-cert") pod "infra-operator-controller-manager-54ccf4f85d-25z65" (UID: "4f507c71-c9ab-4398-b25a-b6070d41f2b7") : secret "infra-operator-webhook-server-cert" not found Jan 22 09:24:50 crc kubenswrapper[4892]: I0122 09:24:50.622782 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-b9v4x" event={"ID":"fcd15b84-585b-4984-9c1f-26a6c585ada4","Type":"ContainerStarted","Data":"a81960eca9529a92c5fb8baf4472dc6e8cf2d25dbc7d1b9c5983abc36b6c3db8"} Jan 22 09:24:50 crc kubenswrapper[4892]: I0122 09:24:50.624334 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-xq8jw" event={"ID":"b6638ff5-13e6-44b1-8711-0c775882282f","Type":"ContainerStarted","Data":"a94e91d3420b1cb625db3503a0494ea29a127451ac60bdc79380172ee83e1477"} Jan 22 09:24:50 crc kubenswrapper[4892]: E0122 09:24:50.626216 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:2d6d13b3c28e45c6bec980b8808dda8da4723ae87e66d04f53d52c3b3c51612b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-xq8jw" podUID="b6638ff5-13e6-44b1-8711-0c775882282f" Jan 22 09:24:50 crc kubenswrapper[4892]: I0122 09:24:50.627323 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-hf9ft" event={"ID":"e23d3dd6-bce9-496f-840b-0bbd3017826f","Type":"ContainerStarted","Data":"551f7b6e64ef940021abd518819ab2f3dbb5a48c0c7b3cf626e3204e770bbfcc"} Jan 22 09:24:50 crc kubenswrapper[4892]: I0122 09:24:50.629661 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-dvlzw" event={"ID":"928d4875-5da0-47ce-a68d-99fed2b7edce","Type":"ContainerStarted","Data":"ee9ec886cb88f3880dd637bbebdeb57f483961e603596bc1baa8cce4ac99d80e"} Jan 22 09:24:50 crc kubenswrapper[4892]: I0122 09:24:50.633030 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-fnrjr" 
event={"ID":"815dba39-30ed-4471-bf04-ecc573373016","Type":"ContainerStarted","Data":"73c9d778197dc8e43640bacda4250b404ba4832afc5e49fbe3dde98ea22e6a28"} Jan 22 09:24:50 crc kubenswrapper[4892]: I0122 09:24:50.648901 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-pkbln" event={"ID":"8a19ffda-db08-44ec-bc17-d70c74f9552e","Type":"ContainerStarted","Data":"ef68cc98b3986fe02f19dd837f7b4b28ff3aa883604586e0ef66d26c43097f37"} Jan 22 09:24:50 crc kubenswrapper[4892]: I0122 09:24:50.652416 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-hj2tb" event={"ID":"be68c0da-a0d9-463c-be32-6191b85ae620","Type":"ContainerStarted","Data":"fa3619f738ddbce8470c7d7a0d6301afa8e065b908b1df96afae4b82e871c744"} Jan 22 09:24:50 crc kubenswrapper[4892]: E0122 09:24:50.653731 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-hj2tb" podUID="be68c0da-a0d9-463c-be32-6191b85ae620" Jan 22 09:24:50 crc kubenswrapper[4892]: I0122 09:24:50.653986 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-4ldkj" event={"ID":"fd035f9e-2587-4286-85d9-db7c209970de","Type":"ContainerStarted","Data":"82b2d7d9ab54bdc19a0c419743da1699bb807a815e576821b12ebd037f637f47"} Jan 22 09:24:50 crc kubenswrapper[4892]: E0122 09:24:50.654608 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831\\\"\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-pkbln" podUID="8a19ffda-db08-44ec-bc17-d70c74f9552e" Jan 22 09:24:50 crc kubenswrapper[4892]: I0122 09:24:50.655168 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-vm28p" event={"ID":"361a2cfd-62a4-40cc-b85c-7e81e6adb91d","Type":"ContainerStarted","Data":"f85a750f0316d1a01c4fafc9089ab8eeb613cf5040334111727d4954661b53f4"} Jan 22 09:24:50 crc kubenswrapper[4892]: I0122 09:24:50.657473 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wkmzq" event={"ID":"c9a77485-9340-433e-8bf6-cd47551438a9","Type":"ContainerStarted","Data":"5f1d378655d0e3a4925e657a620ae980be5b6693bca4591805ef01fc0344e353"} Jan 22 09:24:50 crc kubenswrapper[4892]: I0122 09:24:50.658720 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-67mcr" event={"ID":"f942aff3-65c5-4507-af71-0e4596abc4cf","Type":"ContainerStarted","Data":"f5ad2338f7b428b5c1887aadbb43d8809084b586e32817fa2f010c022360f0ae"} Jan 22 09:24:50 crc kubenswrapper[4892]: I0122 09:24:50.660042 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gfcjl" event={"ID":"f7dcb7b0-0580-4aff-8770-377761a44f88","Type":"ContainerStarted","Data":"256d7b089bd62370fb4f69bb26d994af9b9b433e4577de73541db50d82b09be4"} Jan 22 09:24:50 crc 
kubenswrapper[4892]: E0122 09:24:50.662729 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gfcjl" podUID="f7dcb7b0-0580-4aff-8770-377761a44f88" Jan 22 09:24:50 crc kubenswrapper[4892]: I0122 09:24:50.666195 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea-cert\") pod \"openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr\" (UID: \"c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr" Jan 22 09:24:50 crc kubenswrapper[4892]: E0122 09:24:50.666391 4892 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 09:24:50 crc kubenswrapper[4892]: E0122 09:24:50.666439 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea-cert podName:c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea nodeName:}" failed. No retries permitted until 2026-01-22 09:24:52.66642332 +0000 UTC m=+862.510502383 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea-cert") pod "openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr" (UID: "c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 09:24:50 crc kubenswrapper[4892]: I0122 09:24:50.666795 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hkmzg" event={"ID":"7be69e64-d272-47f2-933a-4925c0aad02c","Type":"ContainerStarted","Data":"cb61d2ffa5ee2eede85f01b86f347bca71cf047b2488f9c4c9725a37e82c56cd"} Jan 22 09:24:50 crc kubenswrapper[4892]: E0122 09:24:50.670402 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hkmzg" podUID="7be69e64-d272-47f2-933a-4925c0aad02c" Jan 22 09:24:50 crc kubenswrapper[4892]: I0122 09:24:50.670586 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-dcjs4" event={"ID":"186e1123-d674-468b-91c1-92eb6bca4a30","Type":"ContainerStarted","Data":"ed4c695447c7544ae385a6494699fe7eb2bef57f7074990f6a2d21aa3e7fc9e8"} Jan 22 09:24:50 crc kubenswrapper[4892]: E0122 09:24:50.672050 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:d3c55b59cb192799f8d31196c55c9e9bb3cd38aef7ec51ef257dabf1548e8b30\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-dcjs4" podUID="186e1123-d674-468b-91c1-92eb6bca4a30" Jan 22 09:24:50 crc kubenswrapper[4892]: I0122 09:24:50.674430 4892 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9htzp" event={"ID":"4ce3456e-dba6-498d-bf5a-aef2832489fe","Type":"ContainerStarted","Data":"e2eedbf415aa20c0fe3a7d34944cfcad1e637c95c5db2a128ec2db584f35deb9"} Jan 22 09:24:50 crc kubenswrapper[4892]: I0122 09:24:50.676321 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-9lqvx" event={"ID":"2047bcfa-42e4-4e81-b2c9-47f4a876ea84","Type":"ContainerStarted","Data":"4e5ea931882c937e0f9518b595324482d43c051694a4e40131b78c98f945ce97"} Jan 22 09:24:50 crc kubenswrapper[4892]: I0122 09:24:50.690779 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-2n9gl" event={"ID":"062ff35c-ceb7-44b0-a2ef-1d79a14a444c","Type":"ContainerStarted","Data":"7ffd3d4cb56aaded2e50d8067ccdecb34d2d78229bf22ca0e87f41b7a8b470f1"} Jan 22 09:24:50 crc kubenswrapper[4892]: I0122 09:24:50.694151 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-sjml2" event={"ID":"43ab3264-2c0d-44a8-ab85-66efc360bf67","Type":"ContainerStarted","Data":"cd9884e24ad5bccaf077bdd0558b1949fe8b119afe12d6093f701d92e86b6f0f"} Jan 22 09:24:50 crc kubenswrapper[4892]: I0122 09:24:50.971325 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-metrics-certs\") pod \"openstack-operator-controller-manager-788c8b99b5-cws6m\" (UID: \"7b2bb8eb-1122-4141-a4ed-c3d316c8b821\") " pod="openstack-operators/openstack-operator-controller-manager-788c8b99b5-cws6m" Jan 22 09:24:50 crc kubenswrapper[4892]: I0122 09:24:50.971849 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-webhook-certs\") pod \"openstack-operator-controller-manager-788c8b99b5-cws6m\" (UID: \"7b2bb8eb-1122-4141-a4ed-c3d316c8b821\") " pod="openstack-operators/openstack-operator-controller-manager-788c8b99b5-cws6m" Jan 22 09:24:50 crc kubenswrapper[4892]: E0122 09:24:50.971790 4892 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 09:24:50 crc kubenswrapper[4892]: E0122 09:24:50.971945 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-metrics-certs podName:7b2bb8eb-1122-4141-a4ed-c3d316c8b821 nodeName:}" failed. No retries permitted until 2026-01-22 09:24:52.971929377 +0000 UTC m=+862.816008440 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-metrics-certs") pod "openstack-operator-controller-manager-788c8b99b5-cws6m" (UID: "7b2bb8eb-1122-4141-a4ed-c3d316c8b821") : secret "metrics-server-cert" not found Jan 22 09:24:50 crc kubenswrapper[4892]: E0122 09:24:50.972128 4892 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 09:24:50 crc kubenswrapper[4892]: E0122 09:24:50.972223 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-webhook-certs podName:7b2bb8eb-1122-4141-a4ed-c3d316c8b821 nodeName:}" failed. 
No retries permitted until 2026-01-22 09:24:52.972200624 +0000 UTC m=+862.816279767 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-webhook-certs") pod "openstack-operator-controller-manager-788c8b99b5-cws6m" (UID: "7b2bb8eb-1122-4141-a4ed-c3d316c8b821") : secret "webhook-server-cert" not found Jan 22 09:24:51 crc kubenswrapper[4892]: E0122 09:24:51.709406 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:d3c55b59cb192799f8d31196c55c9e9bb3cd38aef7ec51ef257dabf1548e8b30\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-dcjs4" podUID="186e1123-d674-468b-91c1-92eb6bca4a30" Jan 22 09:24:51 crc kubenswrapper[4892]: E0122 09:24:51.709871 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gfcjl" podUID="f7dcb7b0-0580-4aff-8770-377761a44f88" Jan 22 09:24:51 crc kubenswrapper[4892]: E0122 09:24:51.709971 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831\\\"\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-pkbln" podUID="8a19ffda-db08-44ec-bc17-d70c74f9552e" Jan 22 09:24:51 crc kubenswrapper[4892]: E0122 09:24:51.710031 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:2d6d13b3c28e45c6bec980b8808dda8da4723ae87e66d04f53d52c3b3c51612b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-xq8jw" podUID="b6638ff5-13e6-44b1-8711-0c775882282f" Jan 22 09:24:51 crc kubenswrapper[4892]: E0122 09:24:51.709887 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hkmzg" podUID="7be69e64-d272-47f2-933a-4925c0aad02c" Jan 22 09:24:51 crc kubenswrapper[4892]: E0122 09:24:51.710091 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-hj2tb" podUID="be68c0da-a0d9-463c-be32-6191b85ae620" Jan 22 09:24:52 crc kubenswrapper[4892]: I0122 09:24:52.506073 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4f507c71-c9ab-4398-b25a-b6070d41f2b7-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-25z65\" (UID: 
\"4f507c71-c9ab-4398-b25a-b6070d41f2b7\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-25z65" Jan 22 09:24:52 crc kubenswrapper[4892]: E0122 09:24:52.506310 4892 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 09:24:52 crc kubenswrapper[4892]: E0122 09:24:52.506399 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f507c71-c9ab-4398-b25a-b6070d41f2b7-cert podName:4f507c71-c9ab-4398-b25a-b6070d41f2b7 nodeName:}" failed. No retries permitted until 2026-01-22 09:24:56.506377573 +0000 UTC m=+866.350456636 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4f507c71-c9ab-4398-b25a-b6070d41f2b7-cert") pod "infra-operator-controller-manager-54ccf4f85d-25z65" (UID: "4f507c71-c9ab-4398-b25a-b6070d41f2b7") : secret "infra-operator-webhook-server-cert" not found Jan 22 09:24:52 crc kubenswrapper[4892]: I0122 09:24:52.708874 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea-cert\") pod \"openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr\" (UID: \"c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr" Jan 22 09:24:52 crc kubenswrapper[4892]: E0122 09:24:52.709105 4892 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 09:24:52 crc kubenswrapper[4892]: E0122 09:24:52.709570 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea-cert podName:c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea nodeName:}" failed. No retries permitted until 2026-01-22 09:24:56.709543225 +0000 UTC m=+866.553622478 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea-cert") pod "openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr" (UID: "c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 09:24:53 crc kubenswrapper[4892]: I0122 09:24:53.013031 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-webhook-certs\") pod \"openstack-operator-controller-manager-788c8b99b5-cws6m\" (UID: \"7b2bb8eb-1122-4141-a4ed-c3d316c8b821\") " pod="openstack-operators/openstack-operator-controller-manager-788c8b99b5-cws6m" Jan 22 09:24:53 crc kubenswrapper[4892]: I0122 09:24:53.013192 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-metrics-certs\") pod \"openstack-operator-controller-manager-788c8b99b5-cws6m\" (UID: \"7b2bb8eb-1122-4141-a4ed-c3d316c8b821\") " pod="openstack-operators/openstack-operator-controller-manager-788c8b99b5-cws6m" Jan 22 09:24:53 crc kubenswrapper[4892]: E0122 09:24:53.013258 4892 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 09:24:53 crc kubenswrapper[4892]: E0122 09:24:53.013517 4892 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 09:24:53 crc kubenswrapper[4892]: E0122 09:24:53.013561 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-webhook-certs podName:7b2bb8eb-1122-4141-a4ed-c3d316c8b821 nodeName:}" failed. No retries permitted until 2026-01-22 09:24:57.013537536 +0000 UTC m=+866.857616599 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-webhook-certs") pod "openstack-operator-controller-manager-788c8b99b5-cws6m" (UID: "7b2bb8eb-1122-4141-a4ed-c3d316c8b821") : secret "webhook-server-cert" not found Jan 22 09:24:53 crc kubenswrapper[4892]: E0122 09:24:53.013584 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-metrics-certs podName:7b2bb8eb-1122-4141-a4ed-c3d316c8b821 nodeName:}" failed. No retries permitted until 2026-01-22 09:24:57.013573957 +0000 UTC m=+866.857653110 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-metrics-certs") pod "openstack-operator-controller-manager-788c8b99b5-cws6m" (UID: "7b2bb8eb-1122-4141-a4ed-c3d316c8b821") : secret "metrics-server-cert" not found Jan 22 09:24:56 crc kubenswrapper[4892]: I0122 09:24:56.569100 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4f507c71-c9ab-4398-b25a-b6070d41f2b7-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-25z65\" (UID: \"4f507c71-c9ab-4398-b25a-b6070d41f2b7\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-25z65" Jan 22 09:24:56 crc kubenswrapper[4892]: E0122 09:24:56.569264 4892 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 09:24:56 crc kubenswrapper[4892]: E0122 09:24:56.569329 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f507c71-c9ab-4398-b25a-b6070d41f2b7-cert podName:4f507c71-c9ab-4398-b25a-b6070d41f2b7 nodeName:}" failed. No retries permitted until 2026-01-22 09:25:04.569313515 +0000 UTC m=+874.413392578 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4f507c71-c9ab-4398-b25a-b6070d41f2b7-cert") pod "infra-operator-controller-manager-54ccf4f85d-25z65" (UID: "4f507c71-c9ab-4398-b25a-b6070d41f2b7") : secret "infra-operator-webhook-server-cert" not found Jan 22 09:24:56 crc kubenswrapper[4892]: I0122 09:24:56.776700 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea-cert\") pod \"openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr\" (UID: \"c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr" Jan 22 09:24:56 crc kubenswrapper[4892]: E0122 09:24:56.777177 4892 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 09:24:56 crc kubenswrapper[4892]: E0122 09:24:56.777221 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea-cert podName:c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea nodeName:}" failed. No retries permitted until 2026-01-22 09:25:04.777208523 +0000 UTC m=+874.621287576 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea-cert") pod "openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr" (UID: "c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 09:24:57 crc kubenswrapper[4892]: I0122 09:24:57.081082 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-metrics-certs\") pod \"openstack-operator-controller-manager-788c8b99b5-cws6m\" (UID: \"7b2bb8eb-1122-4141-a4ed-c3d316c8b821\") " pod="openstack-operators/openstack-operator-controller-manager-788c8b99b5-cws6m" Jan 22 09:24:57 crc kubenswrapper[4892]: I0122 09:24:57.081129 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-webhook-certs\") pod \"openstack-operator-controller-manager-788c8b99b5-cws6m\" (UID: \"7b2bb8eb-1122-4141-a4ed-c3d316c8b821\") " pod="openstack-operators/openstack-operator-controller-manager-788c8b99b5-cws6m" Jan 22 09:24:57 crc kubenswrapper[4892]: E0122 09:24:57.081247 4892 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 09:24:57 crc kubenswrapper[4892]: E0122 09:24:57.081307 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-webhook-certs podName:7b2bb8eb-1122-4141-a4ed-c3d316c8b821 nodeName:}" failed. No retries permitted until 2026-01-22 09:25:05.081292306 +0000 UTC m=+874.925371369 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-webhook-certs") pod "openstack-operator-controller-manager-788c8b99b5-cws6m" (UID: "7b2bb8eb-1122-4141-a4ed-c3d316c8b821") : secret "webhook-server-cert" not found Jan 22 09:24:57 crc kubenswrapper[4892]: E0122 09:24:57.081601 4892 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 09:24:57 crc kubenswrapper[4892]: E0122 09:24:57.081632 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-metrics-certs podName:7b2bb8eb-1122-4141-a4ed-c3d316c8b821 nodeName:}" failed. No retries permitted until 2026-01-22 09:25:05.081625874 +0000 UTC m=+874.925704937 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-metrics-certs") pod "openstack-operator-controller-manager-788c8b99b5-cws6m" (UID: "7b2bb8eb-1122-4141-a4ed-c3d316c8b821") : secret "metrics-server-cert" not found Jan 22 09:25:00 crc kubenswrapper[4892]: E0122 09:25:00.722224 4892 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:e950ac2df7be78ae0cbcf62fe12ee7a06b628f1903da6fcb741609e857eb1a7f" Jan 22 09:25:00 crc kubenswrapper[4892]: E0122 09:25:00.722418 4892 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:e950ac2df7be78ae0cbcf62fe12ee7a06b628f1903da6fcb741609e857eb1a7f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gfddn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-69cf5d4557-fnrjr_openstack-operators(815dba39-30ed-4471-bf04-ecc573373016): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 09:25:00 crc kubenswrapper[4892]: E0122 09:25:00.723678 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-fnrjr" 
podUID="815dba39-30ed-4471-bf04-ecc573373016" Jan 22 09:25:00 crc kubenswrapper[4892]: E0122 09:25:00.771401 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:e950ac2df7be78ae0cbcf62fe12ee7a06b628f1903da6fcb741609e857eb1a7f\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-fnrjr" podUID="815dba39-30ed-4471-bf04-ecc573373016" Jan 22 09:25:02 crc kubenswrapper[4892]: E0122 09:25:02.697055 4892 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127" Jan 22 09:25:02 crc kubenswrapper[4892]: E0122 09:25:02.697572 4892 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xkvq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-2n9gl_openstack-operators(062ff35c-ceb7-44b0-a2ef-1d79a14a444c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 09:25:02 crc kubenswrapper[4892]: E0122 09:25:02.699274 4892 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-2n9gl" podUID="062ff35c-ceb7-44b0-a2ef-1d79a14a444c" Jan 22 09:25:02 crc kubenswrapper[4892]: E0122 09:25:02.786278 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-2n9gl" podUID="062ff35c-ceb7-44b0-a2ef-1d79a14a444c" Jan 22 09:25:04 crc kubenswrapper[4892]: I0122 09:25:04.592148 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4f507c71-c9ab-4398-b25a-b6070d41f2b7-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-25z65\" (UID: \"4f507c71-c9ab-4398-b25a-b6070d41f2b7\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-25z65" Jan 22 09:25:04 crc kubenswrapper[4892]: I0122 09:25:04.597996 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4f507c71-c9ab-4398-b25a-b6070d41f2b7-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-25z65\" (UID: \"4f507c71-c9ab-4398-b25a-b6070d41f2b7\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-25z65" Jan 22 09:25:04 crc kubenswrapper[4892]: I0122 09:25:04.606997 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-25z65" Jan 22 09:25:04 crc kubenswrapper[4892]: I0122 09:25:04.794889 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea-cert\") pod \"openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr\" (UID: \"c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr" Jan 22 09:25:04 crc kubenswrapper[4892]: E0122 09:25:04.795047 4892 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 09:25:04 crc kubenswrapper[4892]: E0122 09:25:04.795100 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea-cert podName:c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea nodeName:}" failed. No retries permitted until 2026-01-22 09:25:20.795083474 +0000 UTC m=+890.639162557 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea-cert") pod "openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr" (UID: "c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 09:25:05 crc kubenswrapper[4892]: I0122 09:25:05.097872 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-metrics-certs\") pod \"openstack-operator-controller-manager-788c8b99b5-cws6m\" (UID: \"7b2bb8eb-1122-4141-a4ed-c3d316c8b821\") " pod="openstack-operators/openstack-operator-controller-manager-788c8b99b5-cws6m" Jan 22 09:25:05 crc kubenswrapper[4892]: I0122 09:25:05.098213 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-webhook-certs\") pod \"openstack-operator-controller-manager-788c8b99b5-cws6m\" (UID: \"7b2bb8eb-1122-4141-a4ed-c3d316c8b821\") " pod="openstack-operators/openstack-operator-controller-manager-788c8b99b5-cws6m" Jan 22 09:25:05 crc kubenswrapper[4892]: E0122 09:25:05.098121 4892 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 09:25:05 crc kubenswrapper[4892]: E0122 09:25:05.098507 4892 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 09:25:05 crc kubenswrapper[4892]: E0122 09:25:05.098514 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-metrics-certs podName:7b2bb8eb-1122-4141-a4ed-c3d316c8b821 nodeName:}" failed. No retries permitted until 2026-01-22 09:25:21.09849233 +0000 UTC m=+890.942571393 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-metrics-certs") pod "openstack-operator-controller-manager-788c8b99b5-cws6m" (UID: "7b2bb8eb-1122-4141-a4ed-c3d316c8b821") : secret "metrics-server-cert" not found Jan 22 09:25:05 crc kubenswrapper[4892]: E0122 09:25:05.098664 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-webhook-certs podName:7b2bb8eb-1122-4141-a4ed-c3d316c8b821 nodeName:}" failed. No retries permitted until 2026-01-22 09:25:21.098635284 +0000 UTC m=+890.942714387 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-webhook-certs") pod "openstack-operator-controller-manager-788c8b99b5-cws6m" (UID: "7b2bb8eb-1122-4141-a4ed-c3d316c8b821") : secret "webhook-server-cert" not found Jan 22 09:25:11 crc kubenswrapper[4892]: I0122 09:25:11.309575 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-25z65"] Jan 22 09:25:11 crc kubenswrapper[4892]: I0122 09:25:11.869574 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-25z65" event={"ID":"4f507c71-c9ab-4398-b25a-b6070d41f2b7","Type":"ContainerStarted","Data":"fea2bf836c5cb44e5f2e27679fd332fc662a9d602cc3efca38398cc98fc99aec"} Jan 22 09:25:12 crc kubenswrapper[4892]: E0122 09:25:12.008178 4892 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 22 09:25:12 crc kubenswrapper[4892]: E0122 09:25:12.008895 4892 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zjbwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
keystone-operator-controller-manager-b8b6d4659-vm28p_openstack-operators(361a2cfd-62a4-40cc-b85c-7e81e6adb91d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 09:25:12 crc kubenswrapper[4892]: E0122 09:25:12.010130 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-vm28p" podUID="361a2cfd-62a4-40cc-b85c-7e81e6adb91d" Jan 22 09:25:12 crc kubenswrapper[4892]: I0122 09:25:12.888431 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wkmzq" event={"ID":"c9a77485-9340-433e-8bf6-cd47551438a9","Type":"ContainerStarted","Data":"635c3e36e036fbef2cec95d9970a9332b44087249e48dffd893d7ac1c1803ea5"} Jan 22 09:25:12 crc kubenswrapper[4892]: I0122 09:25:12.889370 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wkmzq" Jan 22 09:25:12 crc kubenswrapper[4892]: I0122 09:25:12.907867 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-67mcr" event={"ID":"f942aff3-65c5-4507-af71-0e4596abc4cf","Type":"ContainerStarted","Data":"29cc22bc965410204ae10a224da5fe2b9ed304a955bbb91b79b840c9a279408c"} Jan 22 09:25:12 crc kubenswrapper[4892]: I0122 09:25:12.908541 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-67mcr" Jan 22 09:25:12 crc kubenswrapper[4892]: I0122 09:25:12.915813 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-b9v4x" event={"ID":"fcd15b84-585b-4984-9c1f-26a6c585ada4","Type":"ContainerStarted","Data":"5dc0bb61ece1c32b7e774d1c203ac87860806b99044cc63560fc1d69782cd58d"} Jan 22 09:25:12 crc kubenswrapper[4892]: I0122 09:25:12.915936 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-b9v4x" Jan 22 09:25:12 crc kubenswrapper[4892]: I0122 09:25:12.926278 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-9lqvx" event={"ID":"2047bcfa-42e4-4e81-b2c9-47f4a876ea84","Type":"ContainerStarted","Data":"8be18fde9a558fe50139499d805191e7f04ba9ef79301e9613f35e5ae5a41714"} Jan 22 09:25:12 crc kubenswrapper[4892]: I0122 09:25:12.926522 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-9lqvx" Jan 22 09:25:12 crc kubenswrapper[4892]: I0122 09:25:12.937385 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-67mcr" podStartSLOduration=4.290302714 podStartE2EDuration="24.937371979s" podCreationTimestamp="2026-01-22 09:24:48 +0000 UTC" firstStartedPulling="2026-01-22 09:24:50.21948601 +0000 UTC m=+860.063565073" lastFinishedPulling="2026-01-22 09:25:10.866555275 +0000 UTC m=+880.710634338" observedRunningTime="2026-01-22 09:25:12.93086548 +0000 UTC m=+882.774944543" watchObservedRunningTime="2026-01-22 09:25:12.937371979 +0000 UTC m=+882.781451042" Jan 22 09:25:12 crc kubenswrapper[4892]: I0122 09:25:12.937606 4892 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wkmzq" podStartSLOduration=5.198146353 podStartE2EDuration="24.937601724s" podCreationTimestamp="2026-01-22 09:24:48 +0000 UTC" firstStartedPulling="2026-01-22 09:24:50.003844512 +0000 UTC m=+859.847923575" lastFinishedPulling="2026-01-22 09:25:09.743299863 +0000 UTC m=+879.587378946" observedRunningTime="2026-01-22 09:25:12.919078001 +0000 UTC m=+882.763157064" watchObservedRunningTime="2026-01-22 09:25:12.937601724 +0000 UTC m=+882.781680787" Jan 22 09:25:12 crc kubenswrapper[4892]: I0122 09:25:12.938478 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-hf9ft" event={"ID":"e23d3dd6-bce9-496f-840b-0bbd3017826f","Type":"ContainerStarted","Data":"67b9867c0599797f0a815f7c6a14637000054a6888dc7f2477047d44d0dd2c0a"} Jan 22 09:25:12 crc kubenswrapper[4892]: I0122 09:25:12.938586 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-hf9ft" Jan 22 09:25:12 crc kubenswrapper[4892]: I0122 09:25:12.939995 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-sjml2" event={"ID":"43ab3264-2c0d-44a8-ab85-66efc360bf67","Type":"ContainerStarted","Data":"751aba5578b62e47526f07bbc5e511c6a5e56e342464a41f7d2b32038a9279cf"} Jan 22 09:25:12 crc kubenswrapper[4892]: I0122 09:25:12.940607 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-sjml2" Jan 22 09:25:12 crc kubenswrapper[4892]: I0122 09:25:12.946143 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-4ldkj" event={"ID":"fd035f9e-2587-4286-85d9-db7c209970de","Type":"ContainerStarted","Data":"53d1fd6b3980f0fd5768a8d57036878f79ea209217d854be3051a3c449aa0b98"} Jan 22 09:25:12 crc kubenswrapper[4892]: I0122 09:25:12.946746 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-4ldkj" Jan 22 09:25:12 crc kubenswrapper[4892]: I0122 09:25:12.949869 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-b9v4x" podStartSLOduration=3.830541042 podStartE2EDuration="24.949705721s" podCreationTimestamp="2026-01-22 09:24:48 +0000 UTC" firstStartedPulling="2026-01-22 09:24:49.755011722 +0000 UTC m=+859.599090785" lastFinishedPulling="2026-01-22 09:25:10.874176391 +0000 UTC m=+880.718255464" observedRunningTime="2026-01-22 09:25:12.944037202 +0000 UTC m=+882.788116285" watchObservedRunningTime="2026-01-22 09:25:12.949705721 +0000 UTC m=+882.793784784" Jan 22 09:25:12 crc kubenswrapper[4892]: I0122 09:25:12.956557 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-dvlzw" event={"ID":"928d4875-5da0-47ce-a68d-99fed2b7edce","Type":"ContainerStarted","Data":"28e02cceb4d2e12b4458d39d8cc999ff18ef954efc7deee4752aca58881da5f7"} Jan 22 09:25:12 crc kubenswrapper[4892]: I0122 09:25:12.956798 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-dvlzw" Jan 22 09:25:12 crc kubenswrapper[4892]: I0122 09:25:12.960906 4892 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-hf9ft" podStartSLOduration=4.280640118 podStartE2EDuration="24.960890984s" podCreationTimestamp="2026-01-22 09:24:48 +0000 UTC" firstStartedPulling="2026-01-22 09:24:50.186891763 +0000 UTC m=+860.030970826" lastFinishedPulling="2026-01-22 09:25:10.867142619 +0000 UTC m=+880.711221692" observedRunningTime="2026-01-22 09:25:12.956550818 +0000 UTC m=+882.800629881" watchObservedRunningTime="2026-01-22 09:25:12.960890984 +0000 UTC m=+882.804970037" Jan 22 09:25:12 crc kubenswrapper[4892]: I0122 09:25:12.966367 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-mcfls" event={"ID":"f7ec268a-c82e-455e-b4b9-d0f96998c015","Type":"ContainerStarted","Data":"d6b8c75d3ec908ee355134c217ac2fe47ee630c1f13f66286b41aa415b5df7e5"} Jan 22 09:25:12 crc kubenswrapper[4892]: I0122 09:25:12.966886 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-mcfls" Jan 22 09:25:12 crc kubenswrapper[4892]: I0122 09:25:12.968338 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9htzp" event={"ID":"4ce3456e-dba6-498d-bf5a-aef2832489fe","Type":"ContainerStarted","Data":"863bedd8680cfdd40a6c5b3c4a285476eae38c6c3c0c53e891e79b171987b31b"} Jan 22 09:25:12 crc kubenswrapper[4892]: I0122 09:25:12.968675 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9htzp" Jan 22 09:25:12 crc kubenswrapper[4892]: I0122 09:25:12.973950 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-sx9p8" event={"ID":"c020c33f-f12c-47ce-9639-c0069dff8bc4","Type":"ContainerStarted","Data":"cc289220b138cec33556a43f3f3bea08f6c9f09cd0652516858823adc74eb3c8"} Jan 22 09:25:12 crc kubenswrapper[4892]: I0122 09:25:12.974085 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-sx9p8" Jan 22 09:25:12 crc kubenswrapper[4892]: E0122 09:25:12.976943 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-vm28p" podUID="361a2cfd-62a4-40cc-b85c-7e81e6adb91d" Jan 22 09:25:12 crc kubenswrapper[4892]: I0122 09:25:12.978139 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-9lqvx" podStartSLOduration=3.76432329 podStartE2EDuration="24.978120346s" podCreationTimestamp="2026-01-22 09:24:48 +0000 UTC" firstStartedPulling="2026-01-22 09:24:49.652717678 +0000 UTC m=+859.496796731" lastFinishedPulling="2026-01-22 09:25:10.866514724 +0000 UTC m=+880.710593787" observedRunningTime="2026-01-22 09:25:12.977531112 +0000 UTC m=+882.821610185" watchObservedRunningTime="2026-01-22 09:25:12.978120346 +0000 UTC m=+882.822199409" Jan 22 09:25:13 crc kubenswrapper[4892]: I0122 09:25:13.004462 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-sjml2" podStartSLOduration=4.364810138 podStartE2EDuration="25.00444513s" podCreationTimestamp="2026-01-22 09:24:48 +0000 UTC" firstStartedPulling="2026-01-22 09:24:50.217165344 +0000 UTC m=+860.061244417" lastFinishedPulling="2026-01-22 09:25:10.856800346 +0000 UTC m=+880.700879409" observedRunningTime="2026-01-22 09:25:13.002623336 +0000 UTC m=+882.846702399" watchObservedRunningTime="2026-01-22 09:25:13.00444513 +0000 UTC m=+882.848524193" Jan 22 09:25:13 crc kubenswrapper[4892]: I0122 09:25:13.021015 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-sx9p8" podStartSLOduration=3.61065317 podStartE2EDuration="25.021008126s" podCreationTimestamp="2026-01-22 09:24:48 +0000 UTC" firstStartedPulling="2026-01-22 09:24:49.456783253 +0000 UTC m=+859.300862316" lastFinishedPulling="2026-01-22 09:25:10.867138209 +0000 UTC m=+880.711217272" observedRunningTime="2026-01-22 09:25:13.019171381 +0000 UTC m=+882.863250434" watchObservedRunningTime="2026-01-22 09:25:13.021008126 +0000 UTC m=+882.865087189" Jan 22 09:25:13 crc kubenswrapper[4892]: I0122 09:25:13.033247 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9htzp" podStartSLOduration=4.393297645 podStartE2EDuration="25.033236895s" podCreationTimestamp="2026-01-22 09:24:48 +0000 UTC" firstStartedPulling="2026-01-22 09:24:50.227327482 +0000 UTC m=+860.071406545" lastFinishedPulling="2026-01-22 09:25:10.867266692 +0000 UTC m=+880.711345795" observedRunningTime="2026-01-22 09:25:13.030325744 +0000 UTC m=+882.874404807" watchObservedRunningTime="2026-01-22 09:25:13.033236895 +0000 UTC m=+882.877315958" Jan 22 09:25:13 crc kubenswrapper[4892]: I0122 09:25:13.050086 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-4ldkj" podStartSLOduration=4.1746230539999996 podStartE2EDuration="25.050046257s" podCreationTimestamp="2026-01-22 09:24:48 +0000 UTC" firstStartedPulling="2026-01-22 09:24:49.991253054 +0000 UTC m=+859.835332117" lastFinishedPulling="2026-01-22 09:25:10.866676257 +0000 UTC m=+880.710755320" observedRunningTime="2026-01-22 09:25:13.044487671 +0000 UTC m=+882.888566734" watchObservedRunningTime="2026-01-22 09:25:13.050046257 +0000 UTC m=+882.894125320" Jan 22 09:25:13 crc kubenswrapper[4892]: I0122 09:25:13.074657 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-dvlzw" podStartSLOduration=4.197590986 podStartE2EDuration="25.074640649s" podCreationTimestamp="2026-01-22 09:24:48 +0000 UTC" firstStartedPulling="2026-01-22 09:24:49.986850957 +0000 UTC m=+859.830930020" lastFinishedPulling="2026-01-22 09:25:10.86390062 +0000 UTC m=+880.707979683" observedRunningTime="2026-01-22 09:25:13.067508274 +0000 UTC m=+882.911587337" watchObservedRunningTime="2026-01-22 09:25:13.074640649 +0000 UTC m=+882.918719712" Jan 22 09:25:13 crc kubenswrapper[4892]: I0122 09:25:13.093716 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-mcfls" podStartSLOduration=3.582999393 podStartE2EDuration="25.093700105s" podCreationTimestamp="2026-01-22 09:24:48 +0000 UTC" firstStartedPulling="2026-01-22 09:24:49.356446697 +0000 UTC 
m=+859.200525760" lastFinishedPulling="2026-01-22 09:25:10.867147409 +0000 UTC m=+880.711226472" observedRunningTime="2026-01-22 09:25:13.09188122 +0000 UTC m=+882.935960283" watchObservedRunningTime="2026-01-22 09:25:13.093700105 +0000 UTC m=+882.937779168" Jan 22 09:25:14 crc kubenswrapper[4892]: I0122 09:25:14.420144 4892 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 09:25:18 crc kubenswrapper[4892]: I0122 09:25:18.793190 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-mcfls" Jan 22 09:25:18 crc kubenswrapper[4892]: I0122 09:25:18.878788 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-sx9p8" Jan 22 09:25:18 crc kubenswrapper[4892]: I0122 09:25:18.937070 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-9lqvx" Jan 22 09:25:18 crc kubenswrapper[4892]: I0122 09:25:18.975346 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-b9v4x" Jan 22 09:25:18 crc kubenswrapper[4892]: I0122 09:25:18.993258 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wkmzq" Jan 22 09:25:19 crc kubenswrapper[4892]: I0122 09:25:19.104860 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-67mcr" Jan 22 09:25:19 crc kubenswrapper[4892]: I0122 09:25:19.182628 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-4ldkj" Jan 22 09:25:19 crc kubenswrapper[4892]: I0122 09:25:19.187073 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-dvlzw" Jan 22 09:25:19 crc kubenswrapper[4892]: I0122 09:25:19.243892 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9htzp" Jan 22 09:25:19 crc kubenswrapper[4892]: I0122 09:25:19.250840 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-sjml2" Jan 22 09:25:19 crc kubenswrapper[4892]: I0122 09:25:19.288987 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-hf9ft" Jan 22 09:25:20 crc kubenswrapper[4892]: I0122 09:25:20.041795 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-hj2tb" event={"ID":"be68c0da-a0d9-463c-be32-6191b85ae620","Type":"ContainerStarted","Data":"a540306ebb73df24cb2e9f74424fab04a6972d2cbfe24d6734907645488dba80"} Jan 22 09:25:20 crc kubenswrapper[4892]: I0122 09:25:20.042395 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-hj2tb" Jan 22 09:25:20 crc kubenswrapper[4892]: I0122 09:25:20.044656 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hkmzg" 
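The pod_startup_latency_tracker entries encode a fixed relation: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from it. The manila-operator numbers above check out exactly: 24.937371979s minus 20.647069265s of pulling is 4.290302714s, the reported SLO duration. A short Go sketch of the same arithmetic, using the timestamps as printed (the layout string is Go's default time.Time format, which is what the kubelet emits here):

// Recompute podStartSLOduration for the manila-operator pod from the
// timestamps logged above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-01-22 09:24:48 +0000 UTC")           // podCreationTimestamp
	running := parse("2026-01-22 09:25:12.937371979 +0000 UTC") // watchObservedRunningTime
	pullStart := parse("2026-01-22 09:24:50.21948601 +0000 UTC")
	pullEnd := parse("2026-01-22 09:25:10.866555275 +0000 UTC")

	e2e := running.Sub(created)         // podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // podStartSLOduration excludes pull time
	fmt.Println(e2e, slo)               // prints: 24.937371979s 4.290302714s
}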
event={"ID":"7be69e64-d272-47f2-933a-4925c0aad02c","Type":"ContainerStarted","Data":"a5e3a8294641cd82e6dda457b0bae3e38bc0d5f54a82ec3925d6f5d6e9dc7b95"} Jan 22 09:25:20 crc kubenswrapper[4892]: I0122 09:25:20.046640 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-dcjs4" event={"ID":"186e1123-d674-468b-91c1-92eb6bca4a30","Type":"ContainerStarted","Data":"1e98cca1535c738b05302cba4782ed6cf263c8b689ed9063f137d7c0299aa2d8"} Jan 22 09:25:20 crc kubenswrapper[4892]: I0122 09:25:20.046841 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-dcjs4" Jan 22 09:25:20 crc kubenswrapper[4892]: I0122 09:25:20.048665 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-xq8jw" event={"ID":"b6638ff5-13e6-44b1-8711-0c775882282f","Type":"ContainerStarted","Data":"8a44850c1e90519b44153da2335342b73f934e883f47bb1244ec1121a9df56eb"} Jan 22 09:25:20 crc kubenswrapper[4892]: I0122 09:25:20.048877 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-xq8jw" Jan 22 09:25:20 crc kubenswrapper[4892]: I0122 09:25:20.050196 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-pkbln" event={"ID":"8a19ffda-db08-44ec-bc17-d70c74f9552e","Type":"ContainerStarted","Data":"c96f0f4f35bf07edaaa851e3dafedd10bcebde8b986d24f382944cc6841a895d"} Jan 22 09:25:20 crc kubenswrapper[4892]: I0122 09:25:20.050326 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-pkbln" Jan 22 09:25:20 crc kubenswrapper[4892]: I0122 09:25:20.051771 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-2n9gl" event={"ID":"062ff35c-ceb7-44b0-a2ef-1d79a14a444c","Type":"ContainerStarted","Data":"9b1572f8c2e0cf304eb2aacfbb91abee6567deadfea01457f46e89ddefd1603b"} Jan 22 09:25:20 crc kubenswrapper[4892]: I0122 09:25:20.051908 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-2n9gl" Jan 22 09:25:20 crc kubenswrapper[4892]: I0122 09:25:20.053096 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gfcjl" event={"ID":"f7dcb7b0-0580-4aff-8770-377761a44f88","Type":"ContainerStarted","Data":"4b7362637a9cab90fd0a36c506a21ecaeeb75c1c612919eb185105410c936c1f"} Jan 22 09:25:20 crc kubenswrapper[4892]: I0122 09:25:20.053253 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gfcjl" Jan 22 09:25:20 crc kubenswrapper[4892]: I0122 09:25:20.056472 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-25z65" event={"ID":"4f507c71-c9ab-4398-b25a-b6070d41f2b7","Type":"ContainerStarted","Data":"54aeac99c54e05f44c4d211e2ae51df8053d622f169dd3b4294b9fccf2e62ec5"} Jan 22 09:25:20 crc kubenswrapper[4892]: I0122 09:25:20.056581 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-25z65" Jan 22 09:25:20 crc kubenswrapper[4892]: I0122 
09:25:20.058056 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-hj2tb" podStartSLOduration=2.696176556 podStartE2EDuration="32.05804579s" podCreationTimestamp="2026-01-22 09:24:48 +0000 UTC" firstStartedPulling="2026-01-22 09:24:50.238560067 +0000 UTC m=+860.082639130" lastFinishedPulling="2026-01-22 09:25:19.600429311 +0000 UTC m=+889.444508364" observedRunningTime="2026-01-22 09:25:20.054489483 +0000 UTC m=+889.898568546" watchObservedRunningTime="2026-01-22 09:25:20.05804579 +0000 UTC m=+889.902124853" Jan 22 09:25:20 crc kubenswrapper[4892]: I0122 09:25:20.058679 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-fnrjr" event={"ID":"815dba39-30ed-4471-bf04-ecc573373016","Type":"ContainerStarted","Data":"15c6fbf8dfb613ffc37a2cda230b36610a8ea3131b5e9f444db2fec99d341c7e"} Jan 22 09:25:20 crc kubenswrapper[4892]: I0122 09:25:20.058880 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-fnrjr" Jan 22 09:25:20 crc kubenswrapper[4892]: I0122 09:25:20.071617 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gfcjl" podStartSLOduration=2.649531986 podStartE2EDuration="32.071597092s" podCreationTimestamp="2026-01-22 09:24:48 +0000 UTC" firstStartedPulling="2026-01-22 09:24:50.235101913 +0000 UTC m=+860.079180976" lastFinishedPulling="2026-01-22 09:25:19.657167029 +0000 UTC m=+889.501246082" observedRunningTime="2026-01-22 09:25:20.067647225 +0000 UTC m=+889.911726288" watchObservedRunningTime="2026-01-22 09:25:20.071597092 +0000 UTC m=+889.915676155" Jan 22 09:25:20 crc kubenswrapper[4892]: I0122 09:25:20.099085 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-xq8jw" podStartSLOduration=1.736648867 podStartE2EDuration="31.099068104s" podCreationTimestamp="2026-01-22 09:24:49 +0000 UTC" firstStartedPulling="2026-01-22 09:24:50.237984303 +0000 UTC m=+860.082063366" lastFinishedPulling="2026-01-22 09:25:19.60040353 +0000 UTC m=+889.444482603" observedRunningTime="2026-01-22 09:25:20.097327231 +0000 UTC m=+889.941406294" watchObservedRunningTime="2026-01-22 09:25:20.099068104 +0000 UTC m=+889.943147167" Jan 22 09:25:20 crc kubenswrapper[4892]: I0122 09:25:20.121883 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-dcjs4" podStartSLOduration=2.761248869 podStartE2EDuration="32.121864832s" podCreationTimestamp="2026-01-22 09:24:48 +0000 UTC" firstStartedPulling="2026-01-22 09:24:50.235487102 +0000 UTC m=+860.079566165" lastFinishedPulling="2026-01-22 09:25:19.596103065 +0000 UTC m=+889.440182128" observedRunningTime="2026-01-22 09:25:20.119053543 +0000 UTC m=+889.963132606" watchObservedRunningTime="2026-01-22 09:25:20.121864832 +0000 UTC m=+889.965943895" Jan 22 09:25:20 crc kubenswrapper[4892]: I0122 09:25:20.141015 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-2n9gl" podStartSLOduration=2.702216484 podStartE2EDuration="32.14099597s" podCreationTimestamp="2026-01-22 09:24:48 +0000 UTC" firstStartedPulling="2026-01-22 09:24:50.213946725 +0000 UTC m=+860.058025788" 
lastFinishedPulling="2026-01-22 09:25:19.652726211 +0000 UTC m=+889.496805274" observedRunningTime="2026-01-22 09:25:20.137513995 +0000 UTC m=+889.981593058" watchObservedRunningTime="2026-01-22 09:25:20.14099597 +0000 UTC m=+889.985075033" Jan 22 09:25:20 crc kubenswrapper[4892]: I0122 09:25:20.155578 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hkmzg" podStartSLOduration=1.7286291409999999 podStartE2EDuration="31.155561537s" podCreationTimestamp="2026-01-22 09:24:49 +0000 UTC" firstStartedPulling="2026-01-22 09:24:50.242879873 +0000 UTC m=+860.086958936" lastFinishedPulling="2026-01-22 09:25:19.669812269 +0000 UTC m=+889.513891332" observedRunningTime="2026-01-22 09:25:20.153374853 +0000 UTC m=+889.997453906" watchObservedRunningTime="2026-01-22 09:25:20.155561537 +0000 UTC m=+889.999640600" Jan 22 09:25:20 crc kubenswrapper[4892]: I0122 09:25:20.182532 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-pkbln" podStartSLOduration=2.763123255 podStartE2EDuration="32.182511886s" podCreationTimestamp="2026-01-22 09:24:48 +0000 UTC" firstStartedPulling="2026-01-22 09:24:50.23743209 +0000 UTC m=+860.081511153" lastFinishedPulling="2026-01-22 09:25:19.656820721 +0000 UTC m=+889.500899784" observedRunningTime="2026-01-22 09:25:20.18143997 +0000 UTC m=+890.025519043" watchObservedRunningTime="2026-01-22 09:25:20.182511886 +0000 UTC m=+890.026590949" Jan 22 09:25:20 crc kubenswrapper[4892]: I0122 09:25:20.208124 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-25z65" podStartSLOduration=23.925461921 podStartE2EDuration="32.208105983s" podCreationTimestamp="2026-01-22 09:24:48 +0000 UTC" firstStartedPulling="2026-01-22 09:25:11.323534749 +0000 UTC m=+881.167613812" lastFinishedPulling="2026-01-22 09:25:19.606178811 +0000 UTC m=+889.450257874" observedRunningTime="2026-01-22 09:25:20.204866604 +0000 UTC m=+890.048945667" watchObservedRunningTime="2026-01-22 09:25:20.208105983 +0000 UTC m=+890.052185046" Jan 22 09:25:20 crc kubenswrapper[4892]: I0122 09:25:20.252572 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-fnrjr" podStartSLOduration=2.310111118 podStartE2EDuration="32.252554941s" podCreationTimestamp="2026-01-22 09:24:48 +0000 UTC" firstStartedPulling="2026-01-22 09:24:49.696823848 +0000 UTC m=+859.540902911" lastFinishedPulling="2026-01-22 09:25:19.639267651 +0000 UTC m=+889.483346734" observedRunningTime="2026-01-22 09:25:20.250255465 +0000 UTC m=+890.094334538" watchObservedRunningTime="2026-01-22 09:25:20.252554941 +0000 UTC m=+890.096634004" Jan 22 09:25:20 crc kubenswrapper[4892]: I0122 09:25:20.854478 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea-cert\") pod \"openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr\" (UID: \"c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr" Jan 22 09:25:20 crc kubenswrapper[4892]: I0122 09:25:20.868153 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea-cert\") pod 
\"openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr\" (UID: \"c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr" Jan 22 09:25:21 crc kubenswrapper[4892]: I0122 09:25:21.064416 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr" Jan 22 09:25:21 crc kubenswrapper[4892]: I0122 09:25:21.163245 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-metrics-certs\") pod \"openstack-operator-controller-manager-788c8b99b5-cws6m\" (UID: \"7b2bb8eb-1122-4141-a4ed-c3d316c8b821\") " pod="openstack-operators/openstack-operator-controller-manager-788c8b99b5-cws6m" Jan 22 09:25:21 crc kubenswrapper[4892]: I0122 09:25:21.163305 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-webhook-certs\") pod \"openstack-operator-controller-manager-788c8b99b5-cws6m\" (UID: \"7b2bb8eb-1122-4141-a4ed-c3d316c8b821\") " pod="openstack-operators/openstack-operator-controller-manager-788c8b99b5-cws6m" Jan 22 09:25:21 crc kubenswrapper[4892]: I0122 09:25:21.168079 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-metrics-certs\") pod \"openstack-operator-controller-manager-788c8b99b5-cws6m\" (UID: \"7b2bb8eb-1122-4141-a4ed-c3d316c8b821\") " pod="openstack-operators/openstack-operator-controller-manager-788c8b99b5-cws6m" Jan 22 09:25:21 crc kubenswrapper[4892]: I0122 09:25:21.169879 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7b2bb8eb-1122-4141-a4ed-c3d316c8b821-webhook-certs\") pod \"openstack-operator-controller-manager-788c8b99b5-cws6m\" (UID: \"7b2bb8eb-1122-4141-a4ed-c3d316c8b821\") " pod="openstack-operators/openstack-operator-controller-manager-788c8b99b5-cws6m" Jan 22 09:25:21 crc kubenswrapper[4892]: I0122 09:25:21.365638 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr"] Jan 22 09:25:21 crc kubenswrapper[4892]: W0122 09:25:21.374634 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5d90f05_33b1_4b25_84b8_fc2a6e2c0cea.slice/crio-d2dab9e77bf0661339fe84095f86b417e4b696f6f2f683dbdb21595680e136de WatchSource:0}: Error finding container d2dab9e77bf0661339fe84095f86b417e4b696f6f2f683dbdb21595680e136de: Status 404 returned error can't find the container with id d2dab9e77bf0661339fe84095f86b417e4b696f6f2f683dbdb21595680e136de Jan 22 09:25:21 crc kubenswrapper[4892]: I0122 09:25:21.435836 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-788c8b99b5-cws6m" Jan 22 09:25:21 crc kubenswrapper[4892]: I0122 09:25:21.920738 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-788c8b99b5-cws6m"] Jan 22 09:25:21 crc kubenswrapper[4892]: W0122 09:25:21.924451 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b2bb8eb_1122_4141_a4ed_c3d316c8b821.slice/crio-cc08d95877befc809b0e494a3e1a39d8ea15b7de758c8abc55c16595c9eb8b47 WatchSource:0}: Error finding container cc08d95877befc809b0e494a3e1a39d8ea15b7de758c8abc55c16595c9eb8b47: Status 404 returned error can't find the container with id cc08d95877befc809b0e494a3e1a39d8ea15b7de758c8abc55c16595c9eb8b47 Jan 22 09:25:22 crc kubenswrapper[4892]: I0122 09:25:22.089381 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr" event={"ID":"c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea","Type":"ContainerStarted","Data":"d2dab9e77bf0661339fe84095f86b417e4b696f6f2f683dbdb21595680e136de"} Jan 22 09:25:22 crc kubenswrapper[4892]: I0122 09:25:22.091641 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-788c8b99b5-cws6m" event={"ID":"7b2bb8eb-1122-4141-a4ed-c3d316c8b821","Type":"ContainerStarted","Data":"cc08d95877befc809b0e494a3e1a39d8ea15b7de758c8abc55c16595c9eb8b47"} Jan 22 09:25:23 crc kubenswrapper[4892]: I0122 09:25:23.100233 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-788c8b99b5-cws6m" event={"ID":"7b2bb8eb-1122-4141-a4ed-c3d316c8b821","Type":"ContainerStarted","Data":"84a56d4b2dfe5e06472dbbbcdfdb85cd815ed941e81d9e42adaf7ef5e71e349f"} Jan 22 09:25:23 crc kubenswrapper[4892]: I0122 09:25:23.100603 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-788c8b99b5-cws6m" Jan 22 09:25:23 crc kubenswrapper[4892]: I0122 09:25:23.139682 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-788c8b99b5-cws6m" podStartSLOduration=34.139661245 podStartE2EDuration="34.139661245s" podCreationTimestamp="2026-01-22 09:24:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:25:23.135018791 +0000 UTC m=+892.979097864" watchObservedRunningTime="2026-01-22 09:25:23.139661245 +0000 UTC m=+892.983740308" Jan 22 09:25:24 crc kubenswrapper[4892]: I0122 09:25:24.110216 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr" event={"ID":"c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea","Type":"ContainerStarted","Data":"962de52e295c7fdb624ce99c993f63dab8dd3a994ab85d85602a8df702270b32"} Jan 22 09:25:24 crc kubenswrapper[4892]: I0122 09:25:24.110275 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr" Jan 22 09:25:24 crc kubenswrapper[4892]: I0122 09:25:24.161438 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr" 
podStartSLOduration=34.55405124 podStartE2EDuration="36.161417282s" podCreationTimestamp="2026-01-22 09:24:48 +0000 UTC" firstStartedPulling="2026-01-22 09:25:21.376628793 +0000 UTC m=+891.220707856" lastFinishedPulling="2026-01-22 09:25:22.983994835 +0000 UTC m=+892.828073898" observedRunningTime="2026-01-22 09:25:24.142121009 +0000 UTC m=+893.986200092" watchObservedRunningTime="2026-01-22 09:25:24.161417282 +0000 UTC m=+894.005496365" Jan 22 09:25:24 crc kubenswrapper[4892]: I0122 09:25:24.612977 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-25z65" Jan 22 09:25:28 crc kubenswrapper[4892]: I0122 09:25:28.874819 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-fnrjr" Jan 22 09:25:29 crc kubenswrapper[4892]: I0122 09:25:29.046738 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-dcjs4" Jan 22 09:25:29 crc kubenswrapper[4892]: I0122 09:25:29.145330 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-vm28p" event={"ID":"361a2cfd-62a4-40cc-b85c-7e81e6adb91d","Type":"ContainerStarted","Data":"b7b8899a7ff19d9201f9c7f43b04cd0d670f5c729846d6b66a166ccfbdf2e4b0"} Jan 22 09:25:29 crc kubenswrapper[4892]: I0122 09:25:29.145509 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-vm28p" Jan 22 09:25:29 crc kubenswrapper[4892]: I0122 09:25:29.159888 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-vm28p" podStartSLOduration=2.534366258 podStartE2EDuration="41.159872512s" podCreationTimestamp="2026-01-22 09:24:48 +0000 UTC" firstStartedPulling="2026-01-22 09:24:50.215979955 +0000 UTC m=+860.060059008" lastFinishedPulling="2026-01-22 09:25:28.841486189 +0000 UTC m=+898.685565262" observedRunningTime="2026-01-22 09:25:29.159122704 +0000 UTC m=+899.003201787" watchObservedRunningTime="2026-01-22 09:25:29.159872512 +0000 UTC m=+899.003951575" Jan 22 09:25:29 crc kubenswrapper[4892]: I0122 09:25:29.214129 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-pkbln" Jan 22 09:25:29 crc kubenswrapper[4892]: I0122 09:25:29.330097 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gfcjl" Jan 22 09:25:29 crc kubenswrapper[4892]: I0122 09:25:29.398172 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-2n9gl" Jan 22 09:25:29 crc kubenswrapper[4892]: I0122 09:25:29.527608 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-hj2tb" Jan 22 09:25:29 crc kubenswrapper[4892]: I0122 09:25:29.590041 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-xq8jw" Jan 22 09:25:31 crc kubenswrapper[4892]: I0122 09:25:31.074138 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
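Two threads resolve in this stretch. The keystone-operator image pull that was canceled at 09:25:12 finally completes (lastFinishedPulling 09:25:28.84) after sitting in ImagePullBackOff, and the pod goes ready at 09:25:39. Meanwhile each operator's readiness probe flips from status="" to status="ready" roughly six seconds after ContainerStarted, consistent with the readinessProbe in the container spec dumped at 09:25:12: HTTP GET /readyz on port 8081 with initialDelaySeconds 5 and periodSeconds 10. That probe, reconstructed from the spec dump as client-go types (only the variable name is invented):

// ReadinessProbe reconstructed from the &Container{...} dump logged at
// 09:25:12: first checked 5s after start, then every 10s, with 3
// consecutive failures tolerated.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	readiness := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path:   "/readyz",
				Port:   intstr.FromInt(8081),
				Scheme: corev1.URISchemeHTTP,
			},
		},
		InitialDelaySeconds: 5,
		TimeoutSeconds:      1,
		PeriodSeconds:       10,
		SuccessThreshold:    1,
		FailureThreshold:    3,
	}
	fmt.Printf("%+v\n", readiness)
}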
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr" Jan 22 09:25:31 crc kubenswrapper[4892]: I0122 09:25:31.440771 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-788c8b99b5-cws6m" Jan 22 09:25:39 crc kubenswrapper[4892]: I0122 09:25:39.097530 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-vm28p" Jan 22 09:25:56 crc kubenswrapper[4892]: I0122 09:25:56.340391 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-xt9sx"] Jan 22 09:25:56 crc kubenswrapper[4892]: I0122 09:25:56.342699 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-xt9sx" Jan 22 09:25:56 crc kubenswrapper[4892]: I0122 09:25:56.345612 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-9wgnm" Jan 22 09:25:56 crc kubenswrapper[4892]: I0122 09:25:56.345829 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 22 09:25:56 crc kubenswrapper[4892]: I0122 09:25:56.345948 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 22 09:25:56 crc kubenswrapper[4892]: I0122 09:25:56.351360 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 22 09:25:56 crc kubenswrapper[4892]: I0122 09:25:56.353953 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-xt9sx"] Jan 22 09:25:56 crc kubenswrapper[4892]: I0122 09:25:56.370166 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f63d16f2-8679-4ddc-abd5-934392e7648c-config\") pod \"dnsmasq-dns-84bb9d8bd9-xt9sx\" (UID: \"f63d16f2-8679-4ddc-abd5-934392e7648c\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-xt9sx" Jan 22 09:25:56 crc kubenswrapper[4892]: I0122 09:25:56.370542 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2jbd\" (UniqueName: \"kubernetes.io/projected/f63d16f2-8679-4ddc-abd5-934392e7648c-kube-api-access-r2jbd\") pod \"dnsmasq-dns-84bb9d8bd9-xt9sx\" (UID: \"f63d16f2-8679-4ddc-abd5-934392e7648c\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-xt9sx" Jan 22 09:25:56 crc kubenswrapper[4892]: I0122 09:25:56.407447 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-jzjl9"] Jan 22 09:25:56 crc kubenswrapper[4892]: I0122 09:25:56.408421 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-jzjl9" Jan 22 09:25:56 crc kubenswrapper[4892]: I0122 09:25:56.417384 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 22 09:25:56 crc kubenswrapper[4892]: I0122 09:25:56.427129 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-jzjl9"] Jan 22 09:25:56 crc kubenswrapper[4892]: I0122 09:25:56.471555 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9vdj\" (UniqueName: \"kubernetes.io/projected/26741c27-fb70-4941-b10a-027c69c63e47-kube-api-access-z9vdj\") pod \"dnsmasq-dns-5f854695bc-jzjl9\" (UID: \"26741c27-fb70-4941-b10a-027c69c63e47\") " pod="openstack/dnsmasq-dns-5f854695bc-jzjl9" Jan 22 09:25:56 crc kubenswrapper[4892]: I0122 09:25:56.471624 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f63d16f2-8679-4ddc-abd5-934392e7648c-config\") pod \"dnsmasq-dns-84bb9d8bd9-xt9sx\" (UID: \"f63d16f2-8679-4ddc-abd5-934392e7648c\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-xt9sx" Jan 22 09:25:56 crc kubenswrapper[4892]: I0122 09:25:56.471711 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2jbd\" (UniqueName: \"kubernetes.io/projected/f63d16f2-8679-4ddc-abd5-934392e7648c-kube-api-access-r2jbd\") pod \"dnsmasq-dns-84bb9d8bd9-xt9sx\" (UID: \"f63d16f2-8679-4ddc-abd5-934392e7648c\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-xt9sx" Jan 22 09:25:56 crc kubenswrapper[4892]: I0122 09:25:56.471742 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26741c27-fb70-4941-b10a-027c69c63e47-config\") pod \"dnsmasq-dns-5f854695bc-jzjl9\" (UID: \"26741c27-fb70-4941-b10a-027c69c63e47\") " pod="openstack/dnsmasq-dns-5f854695bc-jzjl9" Jan 22 09:25:56 crc kubenswrapper[4892]: I0122 09:25:56.471791 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26741c27-fb70-4941-b10a-027c69c63e47-dns-svc\") pod \"dnsmasq-dns-5f854695bc-jzjl9\" (UID: \"26741c27-fb70-4941-b10a-027c69c63e47\") " pod="openstack/dnsmasq-dns-5f854695bc-jzjl9" Jan 22 09:25:56 crc kubenswrapper[4892]: I0122 09:25:56.472670 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f63d16f2-8679-4ddc-abd5-934392e7648c-config\") pod \"dnsmasq-dns-84bb9d8bd9-xt9sx\" (UID: \"f63d16f2-8679-4ddc-abd5-934392e7648c\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-xt9sx" Jan 22 09:25:56 crc kubenswrapper[4892]: I0122 09:25:56.495229 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2jbd\" (UniqueName: \"kubernetes.io/projected/f63d16f2-8679-4ddc-abd5-934392e7648c-kube-api-access-r2jbd\") pod \"dnsmasq-dns-84bb9d8bd9-xt9sx\" (UID: \"f63d16f2-8679-4ddc-abd5-934392e7648c\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-xt9sx" Jan 22 09:25:56 crc kubenswrapper[4892]: I0122 09:25:56.572314 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26741c27-fb70-4941-b10a-027c69c63e47-dns-svc\") pod \"dnsmasq-dns-5f854695bc-jzjl9\" (UID: \"26741c27-fb70-4941-b10a-027c69c63e47\") " pod="openstack/dnsmasq-dns-5f854695bc-jzjl9" Jan 22 09:25:56 crc kubenswrapper[4892]: I0122 
09:25:56.572375 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9vdj\" (UniqueName: \"kubernetes.io/projected/26741c27-fb70-4941-b10a-027c69c63e47-kube-api-access-z9vdj\") pod \"dnsmasq-dns-5f854695bc-jzjl9\" (UID: \"26741c27-fb70-4941-b10a-027c69c63e47\") " pod="openstack/dnsmasq-dns-5f854695bc-jzjl9" Jan 22 09:25:56 crc kubenswrapper[4892]: I0122 09:25:56.572425 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26741c27-fb70-4941-b10a-027c69c63e47-config\") pod \"dnsmasq-dns-5f854695bc-jzjl9\" (UID: \"26741c27-fb70-4941-b10a-027c69c63e47\") " pod="openstack/dnsmasq-dns-5f854695bc-jzjl9" Jan 22 09:25:56 crc kubenswrapper[4892]: I0122 09:25:56.573166 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26741c27-fb70-4941-b10a-027c69c63e47-config\") pod \"dnsmasq-dns-5f854695bc-jzjl9\" (UID: \"26741c27-fb70-4941-b10a-027c69c63e47\") " pod="openstack/dnsmasq-dns-5f854695bc-jzjl9" Jan 22 09:25:56 crc kubenswrapper[4892]: I0122 09:25:56.573246 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26741c27-fb70-4941-b10a-027c69c63e47-dns-svc\") pod \"dnsmasq-dns-5f854695bc-jzjl9\" (UID: \"26741c27-fb70-4941-b10a-027c69c63e47\") " pod="openstack/dnsmasq-dns-5f854695bc-jzjl9" Jan 22 09:25:56 crc kubenswrapper[4892]: I0122 09:25:56.589245 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9vdj\" (UniqueName: \"kubernetes.io/projected/26741c27-fb70-4941-b10a-027c69c63e47-kube-api-access-z9vdj\") pod \"dnsmasq-dns-5f854695bc-jzjl9\" (UID: \"26741c27-fb70-4941-b10a-027c69c63e47\") " pod="openstack/dnsmasq-dns-5f854695bc-jzjl9" Jan 22 09:25:56 crc kubenswrapper[4892]: I0122 09:25:56.671115 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-xt9sx" Jan 22 09:25:56 crc kubenswrapper[4892]: I0122 09:25:56.733462 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-jzjl9" Jan 22 09:25:56 crc kubenswrapper[4892]: I0122 09:25:56.940369 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-xt9sx"] Jan 22 09:25:57 crc kubenswrapper[4892]: W0122 09:25:57.016039 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26741c27_fb70_4941_b10a_027c69c63e47.slice/crio-1fa4236965e726ecdd9ba9976af84b985621f52133eac93477aaf2c318696a0b WatchSource:0}: Error finding container 1fa4236965e726ecdd9ba9976af84b985621f52133eac93477aaf2c318696a0b: Status 404 returned error can't find the container with id 1fa4236965e726ecdd9ba9976af84b985621f52133eac93477aaf2c318696a0b Jan 22 09:25:57 crc kubenswrapper[4892]: I0122 09:25:57.018957 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-jzjl9"] Jan 22 09:25:57 crc kubenswrapper[4892]: I0122 09:25:57.362258 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-xt9sx" event={"ID":"f63d16f2-8679-4ddc-abd5-934392e7648c","Type":"ContainerStarted","Data":"43f0dd37b20c292eff4c17c8988f9dcde1d498119875eee0cc3845c570cbcb55"} Jan 22 09:25:57 crc kubenswrapper[4892]: I0122 09:25:57.363755 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-jzjl9" event={"ID":"26741c27-fb70-4941-b10a-027c69c63e47","Type":"ContainerStarted","Data":"1fa4236965e726ecdd9ba9976af84b985621f52133eac93477aaf2c318696a0b"} Jan 22 09:25:59 crc kubenswrapper[4892]: I0122 09:25:59.362868 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-jzjl9"] Jan 22 09:25:59 crc kubenswrapper[4892]: I0122 09:25:59.403059 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-6dzhd"] Jan 22 09:25:59 crc kubenswrapper[4892]: I0122 09:25:59.404141 4892 util.go:30] "No sandbox for pod can be found. 
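The dnsmasq-dns pods churn through four template hashes in under four seconds: 84bb9d8bd9-xt9sx (ADD 09:25:56.340), 5f854695bc-jzjl9 (ADD 09:25:56.407, DELETE 09:25:59.362), then 744ffd65bc-6dzhd and, just below, 95f5f6995-kbmtj. The volume lists hint at why: the first pod mounts only "config" plus its service-account token, while every later one also mounts the "dns-svc" ConfigMap, whose cache is populated between the first two ADDs; this reads as the pod template being re-rendered in quick succession as DNS config sources land. A self-contained Go sketch (the journal file path is an assumption) that reconstructs this ordering from a saved copy of the journal:

// Hypothetical helper: recover the pod add/delete ordering for the
// dnsmasq-dns pods from a saved journal file, using the kubelet's own
// SyncLoop markers.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("kubelet-journal.log") // assumed dump of this journal
	if err != nil {
		panic(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		line := sc.Text()
		if !strings.Contains(line, "openstack/dnsmasq-dns-") {
			continue
		}
		for _, ev := range []string{"SyncLoop ADD", "SyncLoop DELETE"} {
			if strings.Contains(line, ev) {
				fmt.Println(ev, "=>", line[strings.Index(line, "openstack/dnsmasq-dns-"):])
			}
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}
}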
Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-6dzhd" Jan 22 09:25:59 crc kubenswrapper[4892]: I0122 09:25:59.416621 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-6dzhd"] Jan 22 09:25:59 crc kubenswrapper[4892]: I0122 09:25:59.514720 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a9eef26-0c1b-48de-9351-d70e43429f38-config\") pod \"dnsmasq-dns-744ffd65bc-6dzhd\" (UID: \"5a9eef26-0c1b-48de-9351-d70e43429f38\") " pod="openstack/dnsmasq-dns-744ffd65bc-6dzhd" Jan 22 09:25:59 crc kubenswrapper[4892]: I0122 09:25:59.514829 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd7vz\" (UniqueName: \"kubernetes.io/projected/5a9eef26-0c1b-48de-9351-d70e43429f38-kube-api-access-kd7vz\") pod \"dnsmasq-dns-744ffd65bc-6dzhd\" (UID: \"5a9eef26-0c1b-48de-9351-d70e43429f38\") " pod="openstack/dnsmasq-dns-744ffd65bc-6dzhd" Jan 22 09:25:59 crc kubenswrapper[4892]: I0122 09:25:59.514908 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a9eef26-0c1b-48de-9351-d70e43429f38-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-6dzhd\" (UID: \"5a9eef26-0c1b-48de-9351-d70e43429f38\") " pod="openstack/dnsmasq-dns-744ffd65bc-6dzhd" Jan 22 09:25:59 crc kubenswrapper[4892]: I0122 09:25:59.616235 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd7vz\" (UniqueName: \"kubernetes.io/projected/5a9eef26-0c1b-48de-9351-d70e43429f38-kube-api-access-kd7vz\") pod \"dnsmasq-dns-744ffd65bc-6dzhd\" (UID: \"5a9eef26-0c1b-48de-9351-d70e43429f38\") " pod="openstack/dnsmasq-dns-744ffd65bc-6dzhd" Jan 22 09:25:59 crc kubenswrapper[4892]: I0122 09:25:59.616333 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a9eef26-0c1b-48de-9351-d70e43429f38-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-6dzhd\" (UID: \"5a9eef26-0c1b-48de-9351-d70e43429f38\") " pod="openstack/dnsmasq-dns-744ffd65bc-6dzhd" Jan 22 09:25:59 crc kubenswrapper[4892]: I0122 09:25:59.616383 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a9eef26-0c1b-48de-9351-d70e43429f38-config\") pod \"dnsmasq-dns-744ffd65bc-6dzhd\" (UID: \"5a9eef26-0c1b-48de-9351-d70e43429f38\") " pod="openstack/dnsmasq-dns-744ffd65bc-6dzhd" Jan 22 09:25:59 crc kubenswrapper[4892]: I0122 09:25:59.617353 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a9eef26-0c1b-48de-9351-d70e43429f38-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-6dzhd\" (UID: \"5a9eef26-0c1b-48de-9351-d70e43429f38\") " pod="openstack/dnsmasq-dns-744ffd65bc-6dzhd" Jan 22 09:25:59 crc kubenswrapper[4892]: I0122 09:25:59.617406 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a9eef26-0c1b-48de-9351-d70e43429f38-config\") pod \"dnsmasq-dns-744ffd65bc-6dzhd\" (UID: \"5a9eef26-0c1b-48de-9351-d70e43429f38\") " pod="openstack/dnsmasq-dns-744ffd65bc-6dzhd" Jan 22 09:25:59 crc kubenswrapper[4892]: I0122 09:25:59.643223 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd7vz\" (UniqueName: 
\"kubernetes.io/projected/5a9eef26-0c1b-48de-9351-d70e43429f38-kube-api-access-kd7vz\") pod \"dnsmasq-dns-744ffd65bc-6dzhd\" (UID: \"5a9eef26-0c1b-48de-9351-d70e43429f38\") " pod="openstack/dnsmasq-dns-744ffd65bc-6dzhd" Jan 22 09:25:59 crc kubenswrapper[4892]: I0122 09:25:59.707415 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-xt9sx"] Jan 22 09:25:59 crc kubenswrapper[4892]: I0122 09:25:59.720897 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-6dzhd" Jan 22 09:25:59 crc kubenswrapper[4892]: I0122 09:25:59.729099 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-kbmtj"] Jan 22 09:25:59 crc kubenswrapper[4892]: I0122 09:25:59.730509 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-kbmtj" Jan 22 09:25:59 crc kubenswrapper[4892]: I0122 09:25:59.751506 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-kbmtj"] Jan 22 09:25:59 crc kubenswrapper[4892]: I0122 09:25:59.820191 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhrlk\" (UniqueName: \"kubernetes.io/projected/6b18d04c-fcc1-4d58-b000-344b8e0b71d0-kube-api-access-nhrlk\") pod \"dnsmasq-dns-95f5f6995-kbmtj\" (UID: \"6b18d04c-fcc1-4d58-b000-344b8e0b71d0\") " pod="openstack/dnsmasq-dns-95f5f6995-kbmtj" Jan 22 09:25:59 crc kubenswrapper[4892]: I0122 09:25:59.820523 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b18d04c-fcc1-4d58-b000-344b8e0b71d0-config\") pod \"dnsmasq-dns-95f5f6995-kbmtj\" (UID: \"6b18d04c-fcc1-4d58-b000-344b8e0b71d0\") " pod="openstack/dnsmasq-dns-95f5f6995-kbmtj" Jan 22 09:25:59 crc kubenswrapper[4892]: I0122 09:25:59.820579 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b18d04c-fcc1-4d58-b000-344b8e0b71d0-dns-svc\") pod \"dnsmasq-dns-95f5f6995-kbmtj\" (UID: \"6b18d04c-fcc1-4d58-b000-344b8e0b71d0\") " pod="openstack/dnsmasq-dns-95f5f6995-kbmtj" Jan 22 09:25:59 crc kubenswrapper[4892]: I0122 09:25:59.922257 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhrlk\" (UniqueName: \"kubernetes.io/projected/6b18d04c-fcc1-4d58-b000-344b8e0b71d0-kube-api-access-nhrlk\") pod \"dnsmasq-dns-95f5f6995-kbmtj\" (UID: \"6b18d04c-fcc1-4d58-b000-344b8e0b71d0\") " pod="openstack/dnsmasq-dns-95f5f6995-kbmtj" Jan 22 09:25:59 crc kubenswrapper[4892]: I0122 09:25:59.922314 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b18d04c-fcc1-4d58-b000-344b8e0b71d0-config\") pod \"dnsmasq-dns-95f5f6995-kbmtj\" (UID: \"6b18d04c-fcc1-4d58-b000-344b8e0b71d0\") " pod="openstack/dnsmasq-dns-95f5f6995-kbmtj" Jan 22 09:25:59 crc kubenswrapper[4892]: I0122 09:25:59.922361 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b18d04c-fcc1-4d58-b000-344b8e0b71d0-dns-svc\") pod \"dnsmasq-dns-95f5f6995-kbmtj\" (UID: \"6b18d04c-fcc1-4d58-b000-344b8e0b71d0\") " pod="openstack/dnsmasq-dns-95f5f6995-kbmtj" Jan 22 09:25:59 crc kubenswrapper[4892]: I0122 09:25:59.923566 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b18d04c-fcc1-4d58-b000-344b8e0b71d0-config\") pod \"dnsmasq-dns-95f5f6995-kbmtj\" (UID: \"6b18d04c-fcc1-4d58-b000-344b8e0b71d0\") " pod="openstack/dnsmasq-dns-95f5f6995-kbmtj" Jan 22 09:25:59 crc kubenswrapper[4892]: I0122 09:25:59.923717 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b18d04c-fcc1-4d58-b000-344b8e0b71d0-dns-svc\") pod \"dnsmasq-dns-95f5f6995-kbmtj\" (UID: \"6b18d04c-fcc1-4d58-b000-344b8e0b71d0\") " pod="openstack/dnsmasq-dns-95f5f6995-kbmtj" Jan 22 09:25:59 crc kubenswrapper[4892]: I0122 09:25:59.978224 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhrlk\" (UniqueName: \"kubernetes.io/projected/6b18d04c-fcc1-4d58-b000-344b8e0b71d0-kube-api-access-nhrlk\") pod \"dnsmasq-dns-95f5f6995-kbmtj\" (UID: \"6b18d04c-fcc1-4d58-b000-344b8e0b71d0\") " pod="openstack/dnsmasq-dns-95f5f6995-kbmtj" Jan 22 09:26:00 crc kubenswrapper[4892]: I0122 09:26:00.105518 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-kbmtj" Jan 22 09:26:00 crc kubenswrapper[4892]: I0122 09:26:00.145853 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-6dzhd"] Jan 22 09:26:00 crc kubenswrapper[4892]: W0122 09:26:00.153928 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a9eef26_0c1b_48de_9351_d70e43429f38.slice/crio-9ecb92bbb39300bca26966c8122393de5e1025e6d796667ba15bb3e09d610e8f WatchSource:0}: Error finding container 9ecb92bbb39300bca26966c8122393de5e1025e6d796667ba15bb3e09d610e8f: Status 404 returned error can't find the container with id 9ecb92bbb39300bca26966c8122393de5e1025e6d796667ba15bb3e09d610e8f Jan 22 09:26:00 crc kubenswrapper[4892]: I0122 09:26:00.318279 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-kbmtj"] Jan 22 09:26:00 crc kubenswrapper[4892]: I0122 09:26:00.382381 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-6dzhd" event={"ID":"5a9eef26-0c1b-48de-9351-d70e43429f38","Type":"ContainerStarted","Data":"9ecb92bbb39300bca26966c8122393de5e1025e6d796667ba15bb3e09d610e8f"} Jan 22 09:26:00 crc kubenswrapper[4892]: I0122 09:26:00.383237 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-kbmtj" event={"ID":"6b18d04c-fcc1-4d58-b000-344b8e0b71d0","Type":"ContainerStarted","Data":"324a81b34ef8a607347a1834ce5996f5c0491ed8021a5dd6ba7c4aabf5be9b08"} Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.294856 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.296319 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.298966 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.299196 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.300639 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.301791 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.304542 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.304714 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.304845 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.305235 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.305445 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.305554 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.305655 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.305849 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-5s4nt" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.305998 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.306105 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-4n6tb" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.306199 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.312453 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.316962 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.329239 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.348979 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.349054 
4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") " pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.349080 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c3106222-75cd-4011-a7d0-33a3d39e3f0c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") " pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.349101 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c3106222-75cd-4011-a7d0-33a3d39e3f0c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") " pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.349130 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k24b5\" (UniqueName: \"kubernetes.io/projected/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-kube-api-access-k24b5\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.349156 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.349219 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.349324 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.349363 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.349397 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c3106222-75cd-4011-a7d0-33a3d39e3f0c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") " pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.349421 4892 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c3106222-75cd-4011-a7d0-33a3d39e3f0c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") " pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.349444 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.349462 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c3106222-75cd-4011-a7d0-33a3d39e3f0c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") " pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.349475 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c3106222-75cd-4011-a7d0-33a3d39e3f0c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") " pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.349494 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.349520 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.349541 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.349559 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c3106222-75cd-4011-a7d0-33a3d39e3f0c-config-data\") pod \"rabbitmq-server-0\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") " pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.349574 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c3106222-75cd-4011-a7d0-33a3d39e3f0c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") " pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.349590 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c3106222-75cd-4011-a7d0-33a3d39e3f0c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") " pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.349604 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkwqm\" (UniqueName: \"kubernetes.io/projected/c3106222-75cd-4011-a7d0-33a3d39e3f0c-kube-api-access-bkwqm\") pod \"rabbitmq-server-0\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") " pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.349624 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.451116 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k24b5\" (UniqueName: \"kubernetes.io/projected/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-kube-api-access-k24b5\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.451194 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.451220 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.451295 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.451320 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.451354 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c3106222-75cd-4011-a7d0-33a3d39e3f0c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") " pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.451374 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/c3106222-75cd-4011-a7d0-33a3d39e3f0c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") " pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.451395 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.451410 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c3106222-75cd-4011-a7d0-33a3d39e3f0c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") " pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.451426 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c3106222-75cd-4011-a7d0-33a3d39e3f0c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") " pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.451441 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.451457 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.451484 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.451501 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c3106222-75cd-4011-a7d0-33a3d39e3f0c-config-data\") pod \"rabbitmq-server-0\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") " pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.451517 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c3106222-75cd-4011-a7d0-33a3d39e3f0c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") " pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.451531 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c3106222-75cd-4011-a7d0-33a3d39e3f0c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") " pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 
09:26:01.451558 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkwqm\" (UniqueName: \"kubernetes.io/projected/c3106222-75cd-4011-a7d0-33a3d39e3f0c-kube-api-access-bkwqm\") pod \"rabbitmq-server-0\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") " pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.451574 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.451619 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.451648 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") " pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.451663 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c3106222-75cd-4011-a7d0-33a3d39e3f0c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") " pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.451681 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c3106222-75cd-4011-a7d0-33a3d39e3f0c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") " pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.451675 4892 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.451855 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.452038 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c3106222-75cd-4011-a7d0-33a3d39e3f0c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") " pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.452157 4892 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod 
\"rabbitmq-server-0\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.452598 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.452776 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.453261 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c3106222-75cd-4011-a7d0-33a3d39e3f0c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") " pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.453601 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.454006 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c3106222-75cd-4011-a7d0-33a3d39e3f0c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") " pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.454725 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c3106222-75cd-4011-a7d0-33a3d39e3f0c-config-data\") pod \"rabbitmq-server-0\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") " pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.455085 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c3106222-75cd-4011-a7d0-33a3d39e3f0c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") " pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.455231 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.457237 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c3106222-75cd-4011-a7d0-33a3d39e3f0c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") " pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.457794 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.458433 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.459871 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c3106222-75cd-4011-a7d0-33a3d39e3f0c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") " pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.460852 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c3106222-75cd-4011-a7d0-33a3d39e3f0c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") " pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.464810 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.469902 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.469986 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c3106222-75cd-4011-a7d0-33a3d39e3f0c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") " pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.475855 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k24b5\" (UniqueName: \"kubernetes.io/projected/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-kube-api-access-k24b5\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.476041 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkwqm\" (UniqueName: \"kubernetes.io/projected/c3106222-75cd-4011-a7d0-33a3d39e3f0c-kube-api-access-bkwqm\") pod \"rabbitmq-server-0\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") " pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.511112 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:01 crc 
kubenswrapper[4892]: I0122 09:26:01.575396 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") " pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.632579 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 09:26:01 crc kubenswrapper[4892]: I0122 09:26:01.646879 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:01.999768 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.001482 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.004077 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.004270 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.004562 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.004590 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-jh4kl" Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.006561 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.011009 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.154740 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.165520 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa34c3fd-3e21-49ac-becd-283928666ff2-operator-scripts\") pod \"openstack-galera-0\" (UID: \"aa34c3fd-3e21-49ac-becd-283928666ff2\") " pod="openstack/openstack-galera-0" Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.165626 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"aa34c3fd-3e21-49ac-becd-283928666ff2\") " pod="openstack/openstack-galera-0" Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.165672 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa34c3fd-3e21-49ac-becd-283928666ff2-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"aa34c3fd-3e21-49ac-becd-283928666ff2\") " pod="openstack/openstack-galera-0" Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.165720 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/aa34c3fd-3e21-49ac-becd-283928666ff2-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"aa34c3fd-3e21-49ac-becd-283928666ff2\") " pod="openstack/openstack-galera-0" Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.165770 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/aa34c3fd-3e21-49ac-becd-283928666ff2-config-data-default\") pod \"openstack-galera-0\" (UID: \"aa34c3fd-3e21-49ac-becd-283928666ff2\") " pod="openstack/openstack-galera-0" Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.165920 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aa34c3fd-3e21-49ac-becd-283928666ff2-kolla-config\") pod \"openstack-galera-0\" (UID: \"aa34c3fd-3e21-49ac-becd-283928666ff2\") " pod="openstack/openstack-galera-0" Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.165962 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrr8c\" (UniqueName: \"kubernetes.io/projected/aa34c3fd-3e21-49ac-becd-283928666ff2-kube-api-access-nrr8c\") pod \"openstack-galera-0\" (UID: \"aa34c3fd-3e21-49ac-becd-283928666ff2\") " pod="openstack/openstack-galera-0" Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.166064 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/aa34c3fd-3e21-49ac-becd-283928666ff2-config-data-generated\") pod \"openstack-galera-0\" (UID: \"aa34c3fd-3e21-49ac-becd-283928666ff2\") " pod="openstack/openstack-galera-0" Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.231461 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 09:26:02 crc kubenswrapper[4892]: W0122 09:26:02.252619 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef2d11ba_fdb4_4ade_af1b_59dae1b1d10f.slice/crio-98ccf255e72579a1af647cf1ff20f0e9fd7f261d6c4679aaa0865186cae5adf2 WatchSource:0}: Error finding container 98ccf255e72579a1af647cf1ff20f0e9fd7f261d6c4679aaa0865186cae5adf2: Status 404 returned error can't find the container with id 98ccf255e72579a1af647cf1ff20f0e9fd7f261d6c4679aaa0865186cae5adf2 Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.272182 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aa34c3fd-3e21-49ac-becd-283928666ff2-kolla-config\") pod \"openstack-galera-0\" (UID: \"aa34c3fd-3e21-49ac-becd-283928666ff2\") " pod="openstack/openstack-galera-0" Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.272228 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrr8c\" (UniqueName: \"kubernetes.io/projected/aa34c3fd-3e21-49ac-becd-283928666ff2-kube-api-access-nrr8c\") pod \"openstack-galera-0\" (UID: \"aa34c3fd-3e21-49ac-becd-283928666ff2\") " pod="openstack/openstack-galera-0" Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.272272 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/aa34c3fd-3e21-49ac-becd-283928666ff2-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"aa34c3fd-3e21-49ac-becd-283928666ff2\") " pod="openstack/openstack-galera-0" Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.272336 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa34c3fd-3e21-49ac-becd-283928666ff2-operator-scripts\") pod \"openstack-galera-0\" (UID: \"aa34c3fd-3e21-49ac-becd-283928666ff2\") " pod="openstack/openstack-galera-0" Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.272370 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"aa34c3fd-3e21-49ac-becd-283928666ff2\") " pod="openstack/openstack-galera-0" Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.272391 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa34c3fd-3e21-49ac-becd-283928666ff2-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"aa34c3fd-3e21-49ac-becd-283928666ff2\") " pod="openstack/openstack-galera-0" Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.272420 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa34c3fd-3e21-49ac-becd-283928666ff2-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"aa34c3fd-3e21-49ac-becd-283928666ff2\") " pod="openstack/openstack-galera-0" Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.272455 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/aa34c3fd-3e21-49ac-becd-283928666ff2-config-data-default\") pod \"openstack-galera-0\" (UID: \"aa34c3fd-3e21-49ac-becd-283928666ff2\") " pod="openstack/openstack-galera-0" Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.272711 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/aa34c3fd-3e21-49ac-becd-283928666ff2-config-data-generated\") pod \"openstack-galera-0\" (UID: \"aa34c3fd-3e21-49ac-becd-283928666ff2\") " pod="openstack/openstack-galera-0" Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.272783 4892 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"aa34c3fd-3e21-49ac-becd-283928666ff2\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-galera-0" Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.273256 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/aa34c3fd-3e21-49ac-becd-283928666ff2-config-data-default\") pod \"openstack-galera-0\" (UID: \"aa34c3fd-3e21-49ac-becd-283928666ff2\") " pod="openstack/openstack-galera-0" Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.273809 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aa34c3fd-3e21-49ac-becd-283928666ff2-kolla-config\") pod \"openstack-galera-0\" (UID: \"aa34c3fd-3e21-49ac-becd-283928666ff2\") " pod="openstack/openstack-galera-0" Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.276543 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa34c3fd-3e21-49ac-becd-283928666ff2-operator-scripts\") pod \"openstack-galera-0\" (UID: \"aa34c3fd-3e21-49ac-becd-283928666ff2\") " pod="openstack/openstack-galera-0" Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.284844 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa34c3fd-3e21-49ac-becd-283928666ff2-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"aa34c3fd-3e21-49ac-becd-283928666ff2\") " pod="openstack/openstack-galera-0" Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.300381 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrr8c\" (UniqueName: \"kubernetes.io/projected/aa34c3fd-3e21-49ac-becd-283928666ff2-kube-api-access-nrr8c\") pod \"openstack-galera-0\" (UID: \"aa34c3fd-3e21-49ac-becd-283928666ff2\") " pod="openstack/openstack-galera-0" Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.301119 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa34c3fd-3e21-49ac-becd-283928666ff2-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"aa34c3fd-3e21-49ac-becd-283928666ff2\") " pod="openstack/openstack-galera-0" Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.302783 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"aa34c3fd-3e21-49ac-becd-283928666ff2\") " pod="openstack/openstack-galera-0" Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.339345 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.413116 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c3106222-75cd-4011-a7d0-33a3d39e3f0c","Type":"ContainerStarted","Data":"10967b58bf59e20d3268bafd761c8a3de605d38f2c4a2a8710091816b4380699"} Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.416682 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f","Type":"ContainerStarted","Data":"98ccf255e72579a1af647cf1ff20f0e9fd7f261d6c4679aaa0865186cae5adf2"} Jan 22 09:26:02 crc kubenswrapper[4892]: I0122 09:26:02.823392 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 22 09:26:02 crc kubenswrapper[4892]: W0122 09:26:02.876493 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa34c3fd_3e21_49ac_becd_283928666ff2.slice/crio-ccfdfcd6117d698015db6cf201a048d08e77591300d75eb3567ebf1eef716611 WatchSource:0}: Error finding container ccfdfcd6117d698015db6cf201a048d08e77591300d75eb3567ebf1eef716611: Status 404 returned error can't find the container with id ccfdfcd6117d698015db6cf201a048d08e77591300d75eb3567ebf1eef716611 Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.298642 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.305128 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.307779 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.308240 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.313238 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.313710 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-rlpgn" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.314485 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.398407 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5dcc844d-f681-4c5c-acb5-0edc57e32a0f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5dcc844d-f681-4c5c-acb5-0edc57e32a0f\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.398705 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5dcc844d-f681-4c5c-acb5-0edc57e32a0f\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.398810 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5dcc844d-f681-4c5c-acb5-0edc57e32a0f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5dcc844d-f681-4c5c-acb5-0edc57e32a0f\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.398881 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dcc844d-f681-4c5c-acb5-0edc57e32a0f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5dcc844d-f681-4c5c-acb5-0edc57e32a0f\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.398897 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5dcc844d-f681-4c5c-acb5-0edc57e32a0f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5dcc844d-f681-4c5c-acb5-0edc57e32a0f\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.399042 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv82v\" (UniqueName: \"kubernetes.io/projected/5dcc844d-f681-4c5c-acb5-0edc57e32a0f-kube-api-access-tv82v\") pod \"openstack-cell1-galera-0\" (UID: \"5dcc844d-f681-4c5c-acb5-0edc57e32a0f\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.399092 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/5dcc844d-f681-4c5c-acb5-0edc57e32a0f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5dcc844d-f681-4c5c-acb5-0edc57e32a0f\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.399122 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dcc844d-f681-4c5c-acb5-0edc57e32a0f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5dcc844d-f681-4c5c-acb5-0edc57e32a0f\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.483364 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"aa34c3fd-3e21-49ac-becd-283928666ff2","Type":"ContainerStarted","Data":"ccfdfcd6117d698015db6cf201a048d08e77591300d75eb3567ebf1eef716611"} Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.500242 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5dcc844d-f681-4c5c-acb5-0edc57e32a0f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5dcc844d-f681-4c5c-acb5-0edc57e32a0f\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.500302 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dcc844d-f681-4c5c-acb5-0edc57e32a0f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5dcc844d-f681-4c5c-acb5-0edc57e32a0f\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.500354 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5dcc844d-f681-4c5c-acb5-0edc57e32a0f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5dcc844d-f681-4c5c-acb5-0edc57e32a0f\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.500412 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5dcc844d-f681-4c5c-acb5-0edc57e32a0f\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.500453 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5dcc844d-f681-4c5c-acb5-0edc57e32a0f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5dcc844d-f681-4c5c-acb5-0edc57e32a0f\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.500478 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dcc844d-f681-4c5c-acb5-0edc57e32a0f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5dcc844d-f681-4c5c-acb5-0edc57e32a0f\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.500494 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5dcc844d-f681-4c5c-acb5-0edc57e32a0f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5dcc844d-f681-4c5c-acb5-0edc57e32a0f\") " 
pod="openstack/openstack-cell1-galera-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.500835 4892 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5dcc844d-f681-4c5c-acb5-0edc57e32a0f\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-cell1-galera-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.501139 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv82v\" (UniqueName: \"kubernetes.io/projected/5dcc844d-f681-4c5c-acb5-0edc57e32a0f-kube-api-access-tv82v\") pod \"openstack-cell1-galera-0\" (UID: \"5dcc844d-f681-4c5c-acb5-0edc57e32a0f\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.501393 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5dcc844d-f681-4c5c-acb5-0edc57e32a0f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5dcc844d-f681-4c5c-acb5-0edc57e32a0f\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.501559 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5dcc844d-f681-4c5c-acb5-0edc57e32a0f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5dcc844d-f681-4c5c-acb5-0edc57e32a0f\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.501865 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5dcc844d-f681-4c5c-acb5-0edc57e32a0f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5dcc844d-f681-4c5c-acb5-0edc57e32a0f\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.502662 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5dcc844d-f681-4c5c-acb5-0edc57e32a0f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5dcc844d-f681-4c5c-acb5-0edc57e32a0f\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.507429 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dcc844d-f681-4c5c-acb5-0edc57e32a0f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5dcc844d-f681-4c5c-acb5-0edc57e32a0f\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.524873 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dcc844d-f681-4c5c-acb5-0edc57e32a0f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5dcc844d-f681-4c5c-acb5-0edc57e32a0f\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.531543 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5dcc844d-f681-4c5c-acb5-0edc57e32a0f\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.539210 4892 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tv82v\" (UniqueName: \"kubernetes.io/projected/5dcc844d-f681-4c5c-acb5-0edc57e32a0f-kube-api-access-tv82v\") pod \"openstack-cell1-galera-0\" (UID: \"5dcc844d-f681-4c5c-acb5-0edc57e32a0f\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.573999 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.574961 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.582223 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.582790 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.583198 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-dqbt4" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.586214 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.640309 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.706157 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1581ad3-031b-451b-a8a7-bea327cf4ecd-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d1581ad3-031b-451b-a8a7-bea327cf4ecd\") " pod="openstack/memcached-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.706231 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1581ad3-031b-451b-a8a7-bea327cf4ecd-config-data\") pod \"memcached-0\" (UID: \"d1581ad3-031b-451b-a8a7-bea327cf4ecd\") " pod="openstack/memcached-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.706380 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1581ad3-031b-451b-a8a7-bea327cf4ecd-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d1581ad3-031b-451b-a8a7-bea327cf4ecd\") " pod="openstack/memcached-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.706510 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d1581ad3-031b-451b-a8a7-bea327cf4ecd-kolla-config\") pod \"memcached-0\" (UID: \"d1581ad3-031b-451b-a8a7-bea327cf4ecd\") " pod="openstack/memcached-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.706566 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twwqp\" (UniqueName: \"kubernetes.io/projected/d1581ad3-031b-451b-a8a7-bea327cf4ecd-kube-api-access-twwqp\") pod \"memcached-0\" (UID: \"d1581ad3-031b-451b-a8a7-bea327cf4ecd\") " pod="openstack/memcached-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.810410 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/d1581ad3-031b-451b-a8a7-bea327cf4ecd-config-data\") pod \"memcached-0\" (UID: \"d1581ad3-031b-451b-a8a7-bea327cf4ecd\") " pod="openstack/memcached-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.810481 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1581ad3-031b-451b-a8a7-bea327cf4ecd-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d1581ad3-031b-451b-a8a7-bea327cf4ecd\") " pod="openstack/memcached-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.810522 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d1581ad3-031b-451b-a8a7-bea327cf4ecd-kolla-config\") pod \"memcached-0\" (UID: \"d1581ad3-031b-451b-a8a7-bea327cf4ecd\") " pod="openstack/memcached-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.810552 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twwqp\" (UniqueName: \"kubernetes.io/projected/d1581ad3-031b-451b-a8a7-bea327cf4ecd-kube-api-access-twwqp\") pod \"memcached-0\" (UID: \"d1581ad3-031b-451b-a8a7-bea327cf4ecd\") " pod="openstack/memcached-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.810572 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1581ad3-031b-451b-a8a7-bea327cf4ecd-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d1581ad3-031b-451b-a8a7-bea327cf4ecd\") " pod="openstack/memcached-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.814728 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d1581ad3-031b-451b-a8a7-bea327cf4ecd-kolla-config\") pod \"memcached-0\" (UID: \"d1581ad3-031b-451b-a8a7-bea327cf4ecd\") " pod="openstack/memcached-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.818527 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1581ad3-031b-451b-a8a7-bea327cf4ecd-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d1581ad3-031b-451b-a8a7-bea327cf4ecd\") " pod="openstack/memcached-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.818576 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1581ad3-031b-451b-a8a7-bea327cf4ecd-config-data\") pod \"memcached-0\" (UID: \"d1581ad3-031b-451b-a8a7-bea327cf4ecd\") " pod="openstack/memcached-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.820068 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1581ad3-031b-451b-a8a7-bea327cf4ecd-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d1581ad3-031b-451b-a8a7-bea327cf4ecd\") " pod="openstack/memcached-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.838879 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twwqp\" (UniqueName: \"kubernetes.io/projected/d1581ad3-031b-451b-a8a7-bea327cf4ecd-kube-api-access-twwqp\") pod \"memcached-0\" (UID: \"d1581ad3-031b-451b-a8a7-bea327cf4ecd\") " pod="openstack/memcached-0" Jan 22 09:26:03 crc kubenswrapper[4892]: I0122 09:26:03.905255 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 22 09:26:04 crc kubenswrapper[4892]: I0122 09:26:04.160334 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 22 09:26:04 crc kubenswrapper[4892]: I0122 09:26:04.426371 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dsksf"] Jan 22 09:26:04 crc kubenswrapper[4892]: I0122 09:26:04.427960 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dsksf" Jan 22 09:26:04 crc kubenswrapper[4892]: I0122 09:26:04.447943 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dsksf"] Jan 22 09:26:04 crc kubenswrapper[4892]: I0122 09:26:04.482650 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 22 09:26:04 crc kubenswrapper[4892]: W0122 09:26:04.489840 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1581ad3_031b_451b_a8a7_bea327cf4ecd.slice/crio-b21a2984873bc4d8fb7e903caa4caa14c280dd8ecb0777f736b936601101f772 WatchSource:0}: Error finding container b21a2984873bc4d8fb7e903caa4caa14c280dd8ecb0777f736b936601101f772: Status 404 returned error can't find the container with id b21a2984873bc4d8fb7e903caa4caa14c280dd8ecb0777f736b936601101f772 Jan 22 09:26:04 crc kubenswrapper[4892]: I0122 09:26:04.497721 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5dcc844d-f681-4c5c-acb5-0edc57e32a0f","Type":"ContainerStarted","Data":"990c6279099fcb986a4ad254f7153ece1cb0788b1856c166686185a3bbaeb91d"} Jan 22 09:26:04 crc kubenswrapper[4892]: I0122 09:26:04.521244 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vgqb\" (UniqueName: \"kubernetes.io/projected/1e032087-c02e-4c24-a634-4f202dc6921b-kube-api-access-4vgqb\") pod \"certified-operators-dsksf\" (UID: \"1e032087-c02e-4c24-a634-4f202dc6921b\") " pod="openshift-marketplace/certified-operators-dsksf" Jan 22 09:26:04 crc kubenswrapper[4892]: I0122 09:26:04.521437 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e032087-c02e-4c24-a634-4f202dc6921b-utilities\") pod \"certified-operators-dsksf\" (UID: \"1e032087-c02e-4c24-a634-4f202dc6921b\") " pod="openshift-marketplace/certified-operators-dsksf" Jan 22 09:26:04 crc kubenswrapper[4892]: I0122 09:26:04.521476 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e032087-c02e-4c24-a634-4f202dc6921b-catalog-content\") pod \"certified-operators-dsksf\" (UID: \"1e032087-c02e-4c24-a634-4f202dc6921b\") " pod="openshift-marketplace/certified-operators-dsksf" Jan 22 09:26:04 crc kubenswrapper[4892]: I0122 09:26:04.622990 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vgqb\" (UniqueName: \"kubernetes.io/projected/1e032087-c02e-4c24-a634-4f202dc6921b-kube-api-access-4vgqb\") pod \"certified-operators-dsksf\" (UID: \"1e032087-c02e-4c24-a634-4f202dc6921b\") " pod="openshift-marketplace/certified-operators-dsksf" Jan 22 09:26:04 crc kubenswrapper[4892]: I0122 09:26:04.623146 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e032087-c02e-4c24-a634-4f202dc6921b-utilities\") pod \"certified-operators-dsksf\" (UID: \"1e032087-c02e-4c24-a634-4f202dc6921b\") " pod="openshift-marketplace/certified-operators-dsksf" Jan 22 09:26:04 crc kubenswrapper[4892]: I0122 09:26:04.623182 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e032087-c02e-4c24-a634-4f202dc6921b-catalog-content\") pod \"certified-operators-dsksf\" (UID: \"1e032087-c02e-4c24-a634-4f202dc6921b\") " pod="openshift-marketplace/certified-operators-dsksf" Jan 22 09:26:04 crc kubenswrapper[4892]: I0122 09:26:04.623835 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e032087-c02e-4c24-a634-4f202dc6921b-catalog-content\") pod \"certified-operators-dsksf\" (UID: \"1e032087-c02e-4c24-a634-4f202dc6921b\") " pod="openshift-marketplace/certified-operators-dsksf" Jan 22 09:26:04 crc kubenswrapper[4892]: I0122 09:26:04.626187 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e032087-c02e-4c24-a634-4f202dc6921b-utilities\") pod \"certified-operators-dsksf\" (UID: \"1e032087-c02e-4c24-a634-4f202dc6921b\") " pod="openshift-marketplace/certified-operators-dsksf" Jan 22 09:26:04 crc kubenswrapper[4892]: I0122 09:26:04.647059 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vgqb\" (UniqueName: \"kubernetes.io/projected/1e032087-c02e-4c24-a634-4f202dc6921b-kube-api-access-4vgqb\") pod \"certified-operators-dsksf\" (UID: \"1e032087-c02e-4c24-a634-4f202dc6921b\") " pod="openshift-marketplace/certified-operators-dsksf" Jan 22 09:26:04 crc kubenswrapper[4892]: I0122 09:26:04.767689 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dsksf" Jan 22 09:26:05 crc kubenswrapper[4892]: I0122 09:26:05.515750 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d1581ad3-031b-451b-a8a7-bea327cf4ecd","Type":"ContainerStarted","Data":"b21a2984873bc4d8fb7e903caa4caa14c280dd8ecb0777f736b936601101f772"} Jan 22 09:26:05 crc kubenswrapper[4892]: I0122 09:26:05.956320 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 09:26:05 crc kubenswrapper[4892]: I0122 09:26:05.957162 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 09:26:05 crc kubenswrapper[4892]: I0122 09:26:05.959513 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-bz7km" Jan 22 09:26:05 crc kubenswrapper[4892]: I0122 09:26:05.964184 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 09:26:06 crc kubenswrapper[4892]: I0122 09:26:06.048626 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jxvz\" (UniqueName: \"kubernetes.io/projected/192369ce-10a4-47fb-9813-94de83265f37-kube-api-access-6jxvz\") pod \"kube-state-metrics-0\" (UID: \"192369ce-10a4-47fb-9813-94de83265f37\") " pod="openstack/kube-state-metrics-0" Jan 22 09:26:06 crc kubenswrapper[4892]: I0122 09:26:06.150123 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jxvz\" (UniqueName: \"kubernetes.io/projected/192369ce-10a4-47fb-9813-94de83265f37-kube-api-access-6jxvz\") pod \"kube-state-metrics-0\" (UID: \"192369ce-10a4-47fb-9813-94de83265f37\") " pod="openstack/kube-state-metrics-0" Jan 22 09:26:06 crc kubenswrapper[4892]: I0122 09:26:06.171689 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jxvz\" (UniqueName: \"kubernetes.io/projected/192369ce-10a4-47fb-9813-94de83265f37-kube-api-access-6jxvz\") pod \"kube-state-metrics-0\" (UID: \"192369ce-10a4-47fb-9813-94de83265f37\") " pod="openstack/kube-state-metrics-0" Jan 22 09:26:06 crc kubenswrapper[4892]: I0122 09:26:06.287904 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.067048 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-f8rll"] Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.068402 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-f8rll" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.070770 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-sbrsd" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.070976 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.072510 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.091555 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-f8rll"] Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.124203 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-snr9q"] Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.125881 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-snr9q" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.141664 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-snr9q"] Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.201644 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2d7ca514-734a-4ab1-890f-b04a1549c073-var-lib\") pod \"ovn-controller-ovs-snr9q\" (UID: \"2d7ca514-734a-4ab1-890f-b04a1549c073\") " pod="openstack/ovn-controller-ovs-snr9q" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.201698 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/91fb6665-4bf4-4558-abf7-788627c34a1c-var-log-ovn\") pod \"ovn-controller-f8rll\" (UID: \"91fb6665-4bf4-4558-abf7-788627c34a1c\") " pod="openstack/ovn-controller-f8rll" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.201767 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/91fb6665-4bf4-4558-abf7-788627c34a1c-var-run-ovn\") pod \"ovn-controller-f8rll\" (UID: \"91fb6665-4bf4-4558-abf7-788627c34a1c\") " pod="openstack/ovn-controller-f8rll" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.201871 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91fb6665-4bf4-4558-abf7-788627c34a1c-combined-ca-bundle\") pod \"ovn-controller-f8rll\" (UID: \"91fb6665-4bf4-4558-abf7-788627c34a1c\") " pod="openstack/ovn-controller-f8rll" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.201969 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/91fb6665-4bf4-4558-abf7-788627c34a1c-var-run\") pod \"ovn-controller-f8rll\" (UID: \"91fb6665-4bf4-4558-abf7-788627c34a1c\") " pod="openstack/ovn-controller-f8rll" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.201999 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2d7ca514-734a-4ab1-890f-b04a1549c073-var-log\") pod \"ovn-controller-ovs-snr9q\" (UID: \"2d7ca514-734a-4ab1-890f-b04a1549c073\") " pod="openstack/ovn-controller-ovs-snr9q" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.202017 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mlgj\" (UniqueName: \"kubernetes.io/projected/2d7ca514-734a-4ab1-890f-b04a1549c073-kube-api-access-4mlgj\") pod \"ovn-controller-ovs-snr9q\" (UID: \"2d7ca514-734a-4ab1-890f-b04a1549c073\") " pod="openstack/ovn-controller-ovs-snr9q" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.202043 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl6dn\" (UniqueName: \"kubernetes.io/projected/91fb6665-4bf4-4558-abf7-788627c34a1c-kube-api-access-dl6dn\") pod \"ovn-controller-f8rll\" (UID: \"91fb6665-4bf4-4558-abf7-788627c34a1c\") " pod="openstack/ovn-controller-f8rll" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.202093 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/91fb6665-4bf4-4558-abf7-788627c34a1c-scripts\") pod \"ovn-controller-f8rll\" (UID: \"91fb6665-4bf4-4558-abf7-788627c34a1c\") " pod="openstack/ovn-controller-f8rll" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.202121 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2d7ca514-734a-4ab1-890f-b04a1549c073-etc-ovs\") pod \"ovn-controller-ovs-snr9q\" (UID: \"2d7ca514-734a-4ab1-890f-b04a1549c073\") " pod="openstack/ovn-controller-ovs-snr9q" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.202142 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/91fb6665-4bf4-4558-abf7-788627c34a1c-ovn-controller-tls-certs\") pod \"ovn-controller-f8rll\" (UID: \"91fb6665-4bf4-4558-abf7-788627c34a1c\") " pod="openstack/ovn-controller-f8rll" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.202184 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d7ca514-734a-4ab1-890f-b04a1549c073-scripts\") pod \"ovn-controller-ovs-snr9q\" (UID: \"2d7ca514-734a-4ab1-890f-b04a1549c073\") " pod="openstack/ovn-controller-ovs-snr9q" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.202203 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2d7ca514-734a-4ab1-890f-b04a1549c073-var-run\") pod \"ovn-controller-ovs-snr9q\" (UID: \"2d7ca514-734a-4ab1-890f-b04a1549c073\") " pod="openstack/ovn-controller-ovs-snr9q" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.304639 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91fb6665-4bf4-4558-abf7-788627c34a1c-scripts\") pod \"ovn-controller-f8rll\" (UID: \"91fb6665-4bf4-4558-abf7-788627c34a1c\") " pod="openstack/ovn-controller-f8rll" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.304710 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2d7ca514-734a-4ab1-890f-b04a1549c073-etc-ovs\") pod \"ovn-controller-ovs-snr9q\" (UID: \"2d7ca514-734a-4ab1-890f-b04a1549c073\") " pod="openstack/ovn-controller-ovs-snr9q" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.304732 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/91fb6665-4bf4-4558-abf7-788627c34a1c-ovn-controller-tls-certs\") pod \"ovn-controller-f8rll\" (UID: \"91fb6665-4bf4-4558-abf7-788627c34a1c\") " pod="openstack/ovn-controller-f8rll" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.304763 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d7ca514-734a-4ab1-890f-b04a1549c073-scripts\") pod \"ovn-controller-ovs-snr9q\" (UID: \"2d7ca514-734a-4ab1-890f-b04a1549c073\") " pod="openstack/ovn-controller-ovs-snr9q" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.304780 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2d7ca514-734a-4ab1-890f-b04a1549c073-var-run\") pod \"ovn-controller-ovs-snr9q\" (UID: 
\"2d7ca514-734a-4ab1-890f-b04a1549c073\") " pod="openstack/ovn-controller-ovs-snr9q" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.304798 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2d7ca514-734a-4ab1-890f-b04a1549c073-var-lib\") pod \"ovn-controller-ovs-snr9q\" (UID: \"2d7ca514-734a-4ab1-890f-b04a1549c073\") " pod="openstack/ovn-controller-ovs-snr9q" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.304819 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/91fb6665-4bf4-4558-abf7-788627c34a1c-var-log-ovn\") pod \"ovn-controller-f8rll\" (UID: \"91fb6665-4bf4-4558-abf7-788627c34a1c\") " pod="openstack/ovn-controller-f8rll" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.304837 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/91fb6665-4bf4-4558-abf7-788627c34a1c-var-run-ovn\") pod \"ovn-controller-f8rll\" (UID: \"91fb6665-4bf4-4558-abf7-788627c34a1c\") " pod="openstack/ovn-controller-f8rll" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.304861 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91fb6665-4bf4-4558-abf7-788627c34a1c-combined-ca-bundle\") pod \"ovn-controller-f8rll\" (UID: \"91fb6665-4bf4-4558-abf7-788627c34a1c\") " pod="openstack/ovn-controller-f8rll" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.305062 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/91fb6665-4bf4-4558-abf7-788627c34a1c-var-run\") pod \"ovn-controller-f8rll\" (UID: \"91fb6665-4bf4-4558-abf7-788627c34a1c\") " pod="openstack/ovn-controller-f8rll" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.305085 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2d7ca514-734a-4ab1-890f-b04a1549c073-var-log\") pod \"ovn-controller-ovs-snr9q\" (UID: \"2d7ca514-734a-4ab1-890f-b04a1549c073\") " pod="openstack/ovn-controller-ovs-snr9q" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.305100 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mlgj\" (UniqueName: \"kubernetes.io/projected/2d7ca514-734a-4ab1-890f-b04a1549c073-kube-api-access-4mlgj\") pod \"ovn-controller-ovs-snr9q\" (UID: \"2d7ca514-734a-4ab1-890f-b04a1549c073\") " pod="openstack/ovn-controller-ovs-snr9q" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.305125 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl6dn\" (UniqueName: \"kubernetes.io/projected/91fb6665-4bf4-4558-abf7-788627c34a1c-kube-api-access-dl6dn\") pod \"ovn-controller-f8rll\" (UID: \"91fb6665-4bf4-4558-abf7-788627c34a1c\") " pod="openstack/ovn-controller-f8rll" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.307456 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91fb6665-4bf4-4558-abf7-788627c34a1c-scripts\") pod \"ovn-controller-f8rll\" (UID: \"91fb6665-4bf4-4558-abf7-788627c34a1c\") " pod="openstack/ovn-controller-f8rll" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.307825 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2d7ca514-734a-4ab1-890f-b04a1549c073-etc-ovs\") pod \"ovn-controller-ovs-snr9q\" (UID: \"2d7ca514-734a-4ab1-890f-b04a1549c073\") " pod="openstack/ovn-controller-ovs-snr9q" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.308619 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/91fb6665-4bf4-4558-abf7-788627c34a1c-var-run\") pod \"ovn-controller-f8rll\" (UID: \"91fb6665-4bf4-4558-abf7-788627c34a1c\") " pod="openstack/ovn-controller-f8rll" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.308692 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/91fb6665-4bf4-4558-abf7-788627c34a1c-var-run-ovn\") pod \"ovn-controller-f8rll\" (UID: \"91fb6665-4bf4-4558-abf7-788627c34a1c\") " pod="openstack/ovn-controller-f8rll" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.309256 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2d7ca514-734a-4ab1-890f-b04a1549c073-var-log\") pod \"ovn-controller-ovs-snr9q\" (UID: \"2d7ca514-734a-4ab1-890f-b04a1549c073\") " pod="openstack/ovn-controller-ovs-snr9q" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.309319 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2d7ca514-734a-4ab1-890f-b04a1549c073-var-lib\") pod \"ovn-controller-ovs-snr9q\" (UID: \"2d7ca514-734a-4ab1-890f-b04a1549c073\") " pod="openstack/ovn-controller-ovs-snr9q" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.309416 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2d7ca514-734a-4ab1-890f-b04a1549c073-var-run\") pod \"ovn-controller-ovs-snr9q\" (UID: \"2d7ca514-734a-4ab1-890f-b04a1549c073\") " pod="openstack/ovn-controller-ovs-snr9q" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.309442 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/91fb6665-4bf4-4558-abf7-788627c34a1c-var-log-ovn\") pod \"ovn-controller-f8rll\" (UID: \"91fb6665-4bf4-4558-abf7-788627c34a1c\") " pod="openstack/ovn-controller-f8rll" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.310773 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d7ca514-734a-4ab1-890f-b04a1549c073-scripts\") pod \"ovn-controller-ovs-snr9q\" (UID: \"2d7ca514-734a-4ab1-890f-b04a1549c073\") " pod="openstack/ovn-controller-ovs-snr9q" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.323659 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl6dn\" (UniqueName: \"kubernetes.io/projected/91fb6665-4bf4-4558-abf7-788627c34a1c-kube-api-access-dl6dn\") pod \"ovn-controller-f8rll\" (UID: \"91fb6665-4bf4-4558-abf7-788627c34a1c\") " pod="openstack/ovn-controller-f8rll" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.325566 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/91fb6665-4bf4-4558-abf7-788627c34a1c-ovn-controller-tls-certs\") pod \"ovn-controller-f8rll\" (UID: \"91fb6665-4bf4-4558-abf7-788627c34a1c\") " pod="openstack/ovn-controller-f8rll" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 
09:26:09.326328 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91fb6665-4bf4-4558-abf7-788627c34a1c-combined-ca-bundle\") pod \"ovn-controller-f8rll\" (UID: \"91fb6665-4bf4-4558-abf7-788627c34a1c\") " pod="openstack/ovn-controller-f8rll" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.327679 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mlgj\" (UniqueName: \"kubernetes.io/projected/2d7ca514-734a-4ab1-890f-b04a1549c073-kube-api-access-4mlgj\") pod \"ovn-controller-ovs-snr9q\" (UID: \"2d7ca514-734a-4ab1-890f-b04a1549c073\") " pod="openstack/ovn-controller-ovs-snr9q" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.398548 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-f8rll" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.440578 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-snr9q" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.595538 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.597110 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.599862 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.600156 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.600447 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.601724 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-9gtk7" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.601989 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.609865 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.717215 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74af166d-c2f0-43b1-a516-e1d393e873b4-config\") pod \"ovsdbserver-nb-0\" (UID: \"74af166d-c2f0-43b1-a516-e1d393e873b4\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.717299 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/74af166d-c2f0-43b1-a516-e1d393e873b4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"74af166d-c2f0-43b1-a516-e1d393e873b4\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.717331 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74af166d-c2f0-43b1-a516-e1d393e873b4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"74af166d-c2f0-43b1-a516-e1d393e873b4\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:26:09 
crc kubenswrapper[4892]: I0122 09:26:09.717346 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/74af166d-c2f0-43b1-a516-e1d393e873b4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"74af166d-c2f0-43b1-a516-e1d393e873b4\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.717579 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/74af166d-c2f0-43b1-a516-e1d393e873b4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"74af166d-c2f0-43b1-a516-e1d393e873b4\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.717695 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74af166d-c2f0-43b1-a516-e1d393e873b4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"74af166d-c2f0-43b1-a516-e1d393e873b4\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.717739 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"74af166d-c2f0-43b1-a516-e1d393e873b4\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.717781 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92n95\" (UniqueName: \"kubernetes.io/projected/74af166d-c2f0-43b1-a516-e1d393e873b4-kube-api-access-92n95\") pod \"ovsdbserver-nb-0\" (UID: \"74af166d-c2f0-43b1-a516-e1d393e873b4\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.822333 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/74af166d-c2f0-43b1-a516-e1d393e873b4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"74af166d-c2f0-43b1-a516-e1d393e873b4\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.822605 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74af166d-c2f0-43b1-a516-e1d393e873b4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"74af166d-c2f0-43b1-a516-e1d393e873b4\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.822665 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"74af166d-c2f0-43b1-a516-e1d393e873b4\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.822687 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92n95\" (UniqueName: \"kubernetes.io/projected/74af166d-c2f0-43b1-a516-e1d393e873b4-kube-api-access-92n95\") pod \"ovsdbserver-nb-0\" (UID: \"74af166d-c2f0-43b1-a516-e1d393e873b4\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.822728 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/74af166d-c2f0-43b1-a516-e1d393e873b4-config\") pod \"ovsdbserver-nb-0\" (UID: \"74af166d-c2f0-43b1-a516-e1d393e873b4\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.822772 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/74af166d-c2f0-43b1-a516-e1d393e873b4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"74af166d-c2f0-43b1-a516-e1d393e873b4\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.822803 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74af166d-c2f0-43b1-a516-e1d393e873b4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"74af166d-c2f0-43b1-a516-e1d393e873b4\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.822820 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/74af166d-c2f0-43b1-a516-e1d393e873b4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"74af166d-c2f0-43b1-a516-e1d393e873b4\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.823535 4892 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"74af166d-c2f0-43b1-a516-e1d393e873b4\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-nb-0" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.823541 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/74af166d-c2f0-43b1-a516-e1d393e873b4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"74af166d-c2f0-43b1-a516-e1d393e873b4\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.823897 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74af166d-c2f0-43b1-a516-e1d393e873b4-config\") pod \"ovsdbserver-nb-0\" (UID: \"74af166d-c2f0-43b1-a516-e1d393e873b4\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.823941 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74af166d-c2f0-43b1-a516-e1d393e873b4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"74af166d-c2f0-43b1-a516-e1d393e873b4\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.827495 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74af166d-c2f0-43b1-a516-e1d393e873b4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"74af166d-c2f0-43b1-a516-e1d393e873b4\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.828596 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/74af166d-c2f0-43b1-a516-e1d393e873b4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"74af166d-c2f0-43b1-a516-e1d393e873b4\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.838600 4892 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-92n95\" (UniqueName: \"kubernetes.io/projected/74af166d-c2f0-43b1-a516-e1d393e873b4-kube-api-access-92n95\") pod \"ovsdbserver-nb-0\" (UID: \"74af166d-c2f0-43b1-a516-e1d393e873b4\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.845244 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/74af166d-c2f0-43b1-a516-e1d393e873b4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"74af166d-c2f0-43b1-a516-e1d393e873b4\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.849583 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"74af166d-c2f0-43b1-a516-e1d393e873b4\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:26:09 crc kubenswrapper[4892]: I0122 09:26:09.923952 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.807243 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.810377 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.813156 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.813473 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-wdqbd" Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.813851 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.813971 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.824794 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.869465 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-55tz7"] Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.871063 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-55tz7" Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.872341 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ed3e46d9-e9ca-453a-92a3-a07471597296\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.872427 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed3e46d9-e9ca-453a-92a3-a07471597296-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ed3e46d9-e9ca-453a-92a3-a07471597296\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.872508 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed3e46d9-e9ca-453a-92a3-a07471597296-config\") pod \"ovsdbserver-sb-0\" (UID: \"ed3e46d9-e9ca-453a-92a3-a07471597296\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.872563 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed3e46d9-e9ca-453a-92a3-a07471597296-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ed3e46d9-e9ca-453a-92a3-a07471597296\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.872598 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed3e46d9-e9ca-453a-92a3-a07471597296-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ed3e46d9-e9ca-453a-92a3-a07471597296\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.872798 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed3e46d9-e9ca-453a-92a3-a07471597296-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ed3e46d9-e9ca-453a-92a3-a07471597296\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.872869 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ed3e46d9-e9ca-453a-92a3-a07471597296-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ed3e46d9-e9ca-453a-92a3-a07471597296\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.872920 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zklnw\" (UniqueName: \"kubernetes.io/projected/ed3e46d9-e9ca-453a-92a3-a07471597296-kube-api-access-zklnw\") pod \"ovsdbserver-sb-0\" (UID: \"ed3e46d9-e9ca-453a-92a3-a07471597296\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.889628 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-55tz7"] Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.974259 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod 
\"ovsdbserver-sb-0\" (UID: \"ed3e46d9-e9ca-453a-92a3-a07471597296\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.974325 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed3e46d9-e9ca-453a-92a3-a07471597296-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ed3e46d9-e9ca-453a-92a3-a07471597296\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.974363 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvlf5\" (UniqueName: \"kubernetes.io/projected/0eca1e45-30a1-46b6-9683-fd48782a4aea-kube-api-access-jvlf5\") pod \"redhat-operators-55tz7\" (UID: \"0eca1e45-30a1-46b6-9683-fd48782a4aea\") " pod="openshift-marketplace/redhat-operators-55tz7" Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.974396 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed3e46d9-e9ca-453a-92a3-a07471597296-config\") pod \"ovsdbserver-sb-0\" (UID: \"ed3e46d9-e9ca-453a-92a3-a07471597296\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.974420 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed3e46d9-e9ca-453a-92a3-a07471597296-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ed3e46d9-e9ca-453a-92a3-a07471597296\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.974437 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eca1e45-30a1-46b6-9683-fd48782a4aea-catalog-content\") pod \"redhat-operators-55tz7\" (UID: \"0eca1e45-30a1-46b6-9683-fd48782a4aea\") " pod="openshift-marketplace/redhat-operators-55tz7" Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.974455 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed3e46d9-e9ca-453a-92a3-a07471597296-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ed3e46d9-e9ca-453a-92a3-a07471597296\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.974696 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eca1e45-30a1-46b6-9683-fd48782a4aea-utilities\") pod \"redhat-operators-55tz7\" (UID: \"0eca1e45-30a1-46b6-9683-fd48782a4aea\") " pod="openshift-marketplace/redhat-operators-55tz7" Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.974771 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed3e46d9-e9ca-453a-92a3-a07471597296-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ed3e46d9-e9ca-453a-92a3-a07471597296\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.974848 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ed3e46d9-e9ca-453a-92a3-a07471597296-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ed3e46d9-e9ca-453a-92a3-a07471597296\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:26:12 crc 
kubenswrapper[4892]: I0122 09:26:12.974927 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zklnw\" (UniqueName: \"kubernetes.io/projected/ed3e46d9-e9ca-453a-92a3-a07471597296-kube-api-access-zklnw\") pod \"ovsdbserver-sb-0\" (UID: \"ed3e46d9-e9ca-453a-92a3-a07471597296\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.975452 4892 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ed3e46d9-e9ca-453a-92a3-a07471597296\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-sb-0" Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.975609 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ed3e46d9-e9ca-453a-92a3-a07471597296-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ed3e46d9-e9ca-453a-92a3-a07471597296\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.976350 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed3e46d9-e9ca-453a-92a3-a07471597296-config\") pod \"ovsdbserver-sb-0\" (UID: \"ed3e46d9-e9ca-453a-92a3-a07471597296\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.976398 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed3e46d9-e9ca-453a-92a3-a07471597296-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ed3e46d9-e9ca-453a-92a3-a07471597296\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.984143 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed3e46d9-e9ca-453a-92a3-a07471597296-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ed3e46d9-e9ca-453a-92a3-a07471597296\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.984729 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed3e46d9-e9ca-453a-92a3-a07471597296-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ed3e46d9-e9ca-453a-92a3-a07471597296\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.986023 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed3e46d9-e9ca-453a-92a3-a07471597296-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ed3e46d9-e9ca-453a-92a3-a07471597296\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:26:12 crc kubenswrapper[4892]: I0122 09:26:12.990527 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zklnw\" (UniqueName: \"kubernetes.io/projected/ed3e46d9-e9ca-453a-92a3-a07471597296-kube-api-access-zklnw\") pod \"ovsdbserver-sb-0\" (UID: \"ed3e46d9-e9ca-453a-92a3-a07471597296\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:26:13 crc kubenswrapper[4892]: I0122 09:26:13.009514 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ed3e46d9-e9ca-453a-92a3-a07471597296\") " 
pod="openstack/ovsdbserver-sb-0" Jan 22 09:26:13 crc kubenswrapper[4892]: I0122 09:26:13.076203 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eca1e45-30a1-46b6-9683-fd48782a4aea-utilities\") pod \"redhat-operators-55tz7\" (UID: \"0eca1e45-30a1-46b6-9683-fd48782a4aea\") " pod="openshift-marketplace/redhat-operators-55tz7" Jan 22 09:26:13 crc kubenswrapper[4892]: I0122 09:26:13.076599 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvlf5\" (UniqueName: \"kubernetes.io/projected/0eca1e45-30a1-46b6-9683-fd48782a4aea-kube-api-access-jvlf5\") pod \"redhat-operators-55tz7\" (UID: \"0eca1e45-30a1-46b6-9683-fd48782a4aea\") " pod="openshift-marketplace/redhat-operators-55tz7" Jan 22 09:26:13 crc kubenswrapper[4892]: I0122 09:26:13.076705 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eca1e45-30a1-46b6-9683-fd48782a4aea-catalog-content\") pod \"redhat-operators-55tz7\" (UID: \"0eca1e45-30a1-46b6-9683-fd48782a4aea\") " pod="openshift-marketplace/redhat-operators-55tz7" Jan 22 09:26:13 crc kubenswrapper[4892]: I0122 09:26:13.076784 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eca1e45-30a1-46b6-9683-fd48782a4aea-utilities\") pod \"redhat-operators-55tz7\" (UID: \"0eca1e45-30a1-46b6-9683-fd48782a4aea\") " pod="openshift-marketplace/redhat-operators-55tz7" Jan 22 09:26:13 crc kubenswrapper[4892]: I0122 09:26:13.077176 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eca1e45-30a1-46b6-9683-fd48782a4aea-catalog-content\") pod \"redhat-operators-55tz7\" (UID: \"0eca1e45-30a1-46b6-9683-fd48782a4aea\") " pod="openshift-marketplace/redhat-operators-55tz7" Jan 22 09:26:13 crc kubenswrapper[4892]: I0122 09:26:13.093407 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvlf5\" (UniqueName: \"kubernetes.io/projected/0eca1e45-30a1-46b6-9683-fd48782a4aea-kube-api-access-jvlf5\") pod \"redhat-operators-55tz7\" (UID: \"0eca1e45-30a1-46b6-9683-fd48782a4aea\") " pod="openshift-marketplace/redhat-operators-55tz7" Jan 22 09:26:13 crc kubenswrapper[4892]: I0122 09:26:13.130453 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 22 09:26:13 crc kubenswrapper[4892]: I0122 09:26:13.209719 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-55tz7" Jan 22 09:26:18 crc kubenswrapper[4892]: I0122 09:26:18.883093 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x9dnj"] Jan 22 09:26:18 crc kubenswrapper[4892]: I0122 09:26:18.885943 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x9dnj" Jan 22 09:26:18 crc kubenswrapper[4892]: I0122 09:26:18.920186 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x9dnj"] Jan 22 09:26:18 crc kubenswrapper[4892]: I0122 09:26:18.984711 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnql6\" (UniqueName: \"kubernetes.io/projected/ea8e907a-3cb4-4368-8c4e-c312f537af37-kube-api-access-dnql6\") pod \"community-operators-x9dnj\" (UID: \"ea8e907a-3cb4-4368-8c4e-c312f537af37\") " pod="openshift-marketplace/community-operators-x9dnj" Jan 22 09:26:18 crc kubenswrapper[4892]: I0122 09:26:18.984762 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea8e907a-3cb4-4368-8c4e-c312f537af37-catalog-content\") pod \"community-operators-x9dnj\" (UID: \"ea8e907a-3cb4-4368-8c4e-c312f537af37\") " pod="openshift-marketplace/community-operators-x9dnj" Jan 22 09:26:18 crc kubenswrapper[4892]: I0122 09:26:18.984967 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea8e907a-3cb4-4368-8c4e-c312f537af37-utilities\") pod \"community-operators-x9dnj\" (UID: \"ea8e907a-3cb4-4368-8c4e-c312f537af37\") " pod="openshift-marketplace/community-operators-x9dnj" Jan 22 09:26:19 crc kubenswrapper[4892]: I0122 09:26:19.086140 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea8e907a-3cb4-4368-8c4e-c312f537af37-utilities\") pod \"community-operators-x9dnj\" (UID: \"ea8e907a-3cb4-4368-8c4e-c312f537af37\") " pod="openshift-marketplace/community-operators-x9dnj" Jan 22 09:26:19 crc kubenswrapper[4892]: I0122 09:26:19.086233 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnql6\" (UniqueName: \"kubernetes.io/projected/ea8e907a-3cb4-4368-8c4e-c312f537af37-kube-api-access-dnql6\") pod \"community-operators-x9dnj\" (UID: \"ea8e907a-3cb4-4368-8c4e-c312f537af37\") " pod="openshift-marketplace/community-operators-x9dnj" Jan 22 09:26:19 crc kubenswrapper[4892]: I0122 09:26:19.086259 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea8e907a-3cb4-4368-8c4e-c312f537af37-catalog-content\") pod \"community-operators-x9dnj\" (UID: \"ea8e907a-3cb4-4368-8c4e-c312f537af37\") " pod="openshift-marketplace/community-operators-x9dnj" Jan 22 09:26:19 crc kubenswrapper[4892]: I0122 09:26:19.086959 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea8e907a-3cb4-4368-8c4e-c312f537af37-utilities\") pod \"community-operators-x9dnj\" (UID: \"ea8e907a-3cb4-4368-8c4e-c312f537af37\") " pod="openshift-marketplace/community-operators-x9dnj" Jan 22 09:26:19 crc kubenswrapper[4892]: I0122 09:26:19.087000 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea8e907a-3cb4-4368-8c4e-c312f537af37-catalog-content\") pod \"community-operators-x9dnj\" (UID: \"ea8e907a-3cb4-4368-8c4e-c312f537af37\") " pod="openshift-marketplace/community-operators-x9dnj" Jan 22 09:26:19 crc kubenswrapper[4892]: I0122 09:26:19.118571 4892 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dnql6\" (UniqueName: \"kubernetes.io/projected/ea8e907a-3cb4-4368-8c4e-c312f537af37-kube-api-access-dnql6\") pod \"community-operators-x9dnj\" (UID: \"ea8e907a-3cb4-4368-8c4e-c312f537af37\") " pod="openshift-marketplace/community-operators-x9dnj" Jan 22 09:26:19 crc kubenswrapper[4892]: I0122 09:26:19.211334 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x9dnj" Jan 22 09:26:27 crc kubenswrapper[4892]: E0122 09:26:27.654032 4892 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d" Jan 22 09:26:27 crc kubenswrapper[4892]: E0122 09:26:27.654978 4892 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bkwqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(c3106222-75cd-4011-a7d0-33a3d39e3f0c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 09:26:27 crc kubenswrapper[4892]: E0122 09:26:27.656180 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="c3106222-75cd-4011-a7d0-33a3d39e3f0c" Jan 22 09:26:27 crc kubenswrapper[4892]: E0122 09:26:27.663715 4892 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d" Jan 22 09:26:27 crc kubenswrapper[4892]: E0122 09:26:27.663879 4892 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 
0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k24b5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 09:26:27 crc kubenswrapper[4892]: E0122 09:26:27.665120 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f" Jan 22 09:26:27 crc kubenswrapper[4892]: E0122 09:26:27.698091 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-server-0" podUID="c3106222-75cd-4011-a7d0-33a3d39e3f0c" Jan 22 09:26:27 crc kubenswrapper[4892]: E0122 09:26:27.698092 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f" Jan 22 09:26:29 crc kubenswrapper[4892]: E0122 09:26:29.445914 4892 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13" Jan 22 09:26:29 crc kubenswrapper[4892]: E0122 09:26:29.446445 4892 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tv82v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(5dcc844d-f681-4c5c-acb5-0edc57e32a0f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 09:26:29 crc kubenswrapper[4892]: E0122 09:26:29.447522 4892 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13" Jan 22 09:26:29 crc kubenswrapper[4892]: E0122 09:26:29.447580 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="5dcc844d-f681-4c5c-acb5-0edc57e32a0f" Jan 22 09:26:29 crc kubenswrapper[4892]: E0122 09:26:29.447625 4892 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nrr8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(aa34c3fd-3e21-49ac-becd-283928666ff2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 09:26:29 crc kubenswrapper[4892]: E0122 09:26:29.449703 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="aa34c3fd-3e21-49ac-becd-283928666ff2" Jan 22 09:26:29 crc kubenswrapper[4892]: E0122 09:26:29.713981 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="5dcc844d-f681-4c5c-acb5-0edc57e32a0f" Jan 22 09:26:29 crc kubenswrapper[4892]: E0122 09:26:29.716512 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13\\\"\"" pod="openstack/openstack-galera-0" podUID="aa34c3fd-3e21-49ac-becd-283928666ff2" Jan 22 09:26:30 crc kubenswrapper[4892]: E0122 09:26:30.349113 4892 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 22 09:26:30 crc kubenswrapper[4892]: E0122 09:26:30.349249 4892 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z9vdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5f854695bc-jzjl9_openstack(26741c27-fb70-4941-b10a-027c69c63e47): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 09:26:30 crc kubenswrapper[4892]: E0122 09:26:30.350457 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5f854695bc-jzjl9" podUID="26741c27-fb70-4941-b10a-027c69c63e47" Jan 22 09:26:30 crc kubenswrapper[4892]: E0122 09:26:30.402475 4892 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 22 09:26:30 crc kubenswrapper[4892]: E0122 09:26:30.402665 4892 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r2jbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-84bb9d8bd9-xt9sx_openstack(f63d16f2-8679-4ddc-abd5-934392e7648c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 09:26:30 crc kubenswrapper[4892]: E0122 09:26:30.404040 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-84bb9d8bd9-xt9sx" podUID="f63d16f2-8679-4ddc-abd5-934392e7648c" Jan 22 09:26:30 crc kubenswrapper[4892]: E0122 09:26:30.412272 4892 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 22 09:26:30 crc kubenswrapper[4892]: E0122 09:26:30.412460 4892 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nhrlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-95f5f6995-kbmtj_openstack(6b18d04c-fcc1-4d58-b000-344b8e0b71d0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 09:26:30 crc kubenswrapper[4892]: E0122 09:26:30.413646 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-95f5f6995-kbmtj" podUID="6b18d04c-fcc1-4d58-b000-344b8e0b71d0" Jan 22 09:26:30 crc kubenswrapper[4892]: E0122 09:26:30.714850 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-95f5f6995-kbmtj" podUID="6b18d04c-fcc1-4d58-b000-344b8e0b71d0" Jan 22 09:26:31 crc kubenswrapper[4892]: E0122 09:26:31.214217 4892 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc" Jan 22 09:26:31 crc kubenswrapper[4892]: E0122 09:26:31.214901 4892 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n568h69h89h65h5b8h58dh5c7hb7hfh548h55h667h69h6h66bh84h544h5cch7hd4h5b5h5d6h595h58bh5fbh655h54ch676h6dhd9h9ch645q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-twwqp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(d1581ad3-031b-451b-a8a7-bea327cf4ecd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 09:26:31 crc kubenswrapper[4892]: E0122 09:26:31.216249 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="d1581ad3-031b-451b-a8a7-bea327cf4ecd" Jan 22 09:26:31 crc kubenswrapper[4892]: E0122 09:26:31.251314 4892 log.go:32] "PullImage 
from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 22 09:26:31 crc kubenswrapper[4892]: E0122 09:26:31.251678 4892 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kd7vz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-744ffd65bc-6dzhd_openstack(5a9eef26-0c1b-48de-9351-d70e43429f38): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 09:26:31 crc kubenswrapper[4892]: E0122 09:26:31.253054 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-744ffd65bc-6dzhd" podUID="5a9eef26-0c1b-48de-9351-d70e43429f38" Jan 22 09:26:31 crc kubenswrapper[4892]: I0122 09:26:31.366553 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-xt9sx" Jan 22 09:26:31 crc kubenswrapper[4892]: I0122 09:26:31.371513 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-jzjl9" Jan 22 09:26:31 crc kubenswrapper[4892]: I0122 09:26:31.501130 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f63d16f2-8679-4ddc-abd5-934392e7648c-config\") pod \"f63d16f2-8679-4ddc-abd5-934392e7648c\" (UID: \"f63d16f2-8679-4ddc-abd5-934392e7648c\") " Jan 22 09:26:31 crc kubenswrapper[4892]: I0122 09:26:31.501550 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2jbd\" (UniqueName: \"kubernetes.io/projected/f63d16f2-8679-4ddc-abd5-934392e7648c-kube-api-access-r2jbd\") pod \"f63d16f2-8679-4ddc-abd5-934392e7648c\" (UID: \"f63d16f2-8679-4ddc-abd5-934392e7648c\") " Jan 22 09:26:31 crc kubenswrapper[4892]: I0122 09:26:31.501659 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f63d16f2-8679-4ddc-abd5-934392e7648c-config" (OuterVolumeSpecName: "config") pod "f63d16f2-8679-4ddc-abd5-934392e7648c" (UID: "f63d16f2-8679-4ddc-abd5-934392e7648c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:26:31 crc kubenswrapper[4892]: I0122 09:26:31.502141 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9vdj\" (UniqueName: \"kubernetes.io/projected/26741c27-fb70-4941-b10a-027c69c63e47-kube-api-access-z9vdj\") pod \"26741c27-fb70-4941-b10a-027c69c63e47\" (UID: \"26741c27-fb70-4941-b10a-027c69c63e47\") " Jan 22 09:26:31 crc kubenswrapper[4892]: I0122 09:26:31.502212 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26741c27-fb70-4941-b10a-027c69c63e47-dns-svc\") pod \"26741c27-fb70-4941-b10a-027c69c63e47\" (UID: \"26741c27-fb70-4941-b10a-027c69c63e47\") " Jan 22 09:26:31 crc kubenswrapper[4892]: I0122 09:26:31.502348 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26741c27-fb70-4941-b10a-027c69c63e47-config\") pod \"26741c27-fb70-4941-b10a-027c69c63e47\" (UID: \"26741c27-fb70-4941-b10a-027c69c63e47\") " Jan 22 09:26:31 crc kubenswrapper[4892]: I0122 09:26:31.502692 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26741c27-fb70-4941-b10a-027c69c63e47-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "26741c27-fb70-4941-b10a-027c69c63e47" (UID: "26741c27-fb70-4941-b10a-027c69c63e47"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:26:31 crc kubenswrapper[4892]: I0122 09:26:31.502845 4892 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26741c27-fb70-4941-b10a-027c69c63e47-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 09:26:31 crc kubenswrapper[4892]: I0122 09:26:31.502879 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f63d16f2-8679-4ddc-abd5-934392e7648c-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:26:31 crc kubenswrapper[4892]: I0122 09:26:31.503058 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26741c27-fb70-4941-b10a-027c69c63e47-config" (OuterVolumeSpecName: "config") pod "26741c27-fb70-4941-b10a-027c69c63e47" (UID: "26741c27-fb70-4941-b10a-027c69c63e47"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:26:31 crc kubenswrapper[4892]: I0122 09:26:31.507128 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26741c27-fb70-4941-b10a-027c69c63e47-kube-api-access-z9vdj" (OuterVolumeSpecName: "kube-api-access-z9vdj") pod "26741c27-fb70-4941-b10a-027c69c63e47" (UID: "26741c27-fb70-4941-b10a-027c69c63e47"). InnerVolumeSpecName "kube-api-access-z9vdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:26:31 crc kubenswrapper[4892]: I0122 09:26:31.507823 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f63d16f2-8679-4ddc-abd5-934392e7648c-kube-api-access-r2jbd" (OuterVolumeSpecName: "kube-api-access-r2jbd") pod "f63d16f2-8679-4ddc-abd5-934392e7648c" (UID: "f63d16f2-8679-4ddc-abd5-934392e7648c"). InnerVolumeSpecName "kube-api-access-r2jbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:26:31 crc kubenswrapper[4892]: I0122 09:26:31.604749 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26741c27-fb70-4941-b10a-027c69c63e47-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:26:31 crc kubenswrapper[4892]: I0122 09:26:31.604787 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2jbd\" (UniqueName: \"kubernetes.io/projected/f63d16f2-8679-4ddc-abd5-934392e7648c-kube-api-access-r2jbd\") on node \"crc\" DevicePath \"\"" Jan 22 09:26:31 crc kubenswrapper[4892]: I0122 09:26:31.604800 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9vdj\" (UniqueName: \"kubernetes.io/projected/26741c27-fb70-4941-b10a-027c69c63e47-kube-api-access-z9vdj\") on node \"crc\" DevicePath \"\"" Jan 22 09:26:31 crc kubenswrapper[4892]: I0122 09:26:31.723834 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-xt9sx" event={"ID":"f63d16f2-8679-4ddc-abd5-934392e7648c","Type":"ContainerDied","Data":"43f0dd37b20c292eff4c17c8988f9dcde1d498119875eee0cc3845c570cbcb55"} Jan 22 09:26:31 crc kubenswrapper[4892]: I0122 09:26:31.724190 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-xt9sx" Jan 22 09:26:31 crc kubenswrapper[4892]: I0122 09:26:31.727243 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-jzjl9" Jan 22 09:26:31 crc kubenswrapper[4892]: I0122 09:26:31.730336 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-jzjl9" event={"ID":"26741c27-fb70-4941-b10a-027c69c63e47","Type":"ContainerDied","Data":"1fa4236965e726ecdd9ba9976af84b985621f52133eac93477aaf2c318696a0b"} Jan 22 09:26:31 crc kubenswrapper[4892]: E0122 09:26:31.730889 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-744ffd65bc-6dzhd" podUID="5a9eef26-0c1b-48de-9351-d70e43429f38" Jan 22 09:26:31 crc kubenswrapper[4892]: E0122 09:26:31.736052 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc\\\"\"" pod="openstack/memcached-0" podUID="d1581ad3-031b-451b-a8a7-bea327cf4ecd" Jan 22 09:26:31 crc kubenswrapper[4892]: I0122 09:26:31.753774 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-f8rll"] Jan 22 09:26:31 crc kubenswrapper[4892]: I0122 09:26:31.819804 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-xt9sx"] Jan 22 09:26:31 crc kubenswrapper[4892]: I0122 09:26:31.837062 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-xt9sx"] Jan 22 09:26:31 crc kubenswrapper[4892]: W0122 09:26:31.857495 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded3e46d9_e9ca_453a_92a3_a07471597296.slice/crio-5a02a3a06615b82f3c97c3d01b679975c8cba19894b0cdfc48beef9691c7df57 WatchSource:0}: Error finding container 5a02a3a06615b82f3c97c3d01b679975c8cba19894b0cdfc48beef9691c7df57: Status 404 returned error can't find the container with id 5a02a3a06615b82f3c97c3d01b679975c8cba19894b0cdfc48beef9691c7df57 Jan 22 09:26:31 crc kubenswrapper[4892]: I0122 09:26:31.858158 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-jzjl9"] Jan 22 09:26:31 crc kubenswrapper[4892]: I0122 09:26:31.888179 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-jzjl9"] Jan 22 09:26:31 crc kubenswrapper[4892]: I0122 09:26:31.900861 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 22 09:26:31 crc kubenswrapper[4892]: I0122 09:26:31.943615 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dsksf"] Jan 22 09:26:31 crc kubenswrapper[4892]: I0122 09:26:31.962832 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-55tz7"] Jan 22 09:26:31 crc kubenswrapper[4892]: I0122 09:26:31.994498 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 09:26:32 crc kubenswrapper[4892]: I0122 09:26:32.002854 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x9dnj"] Jan 22 09:26:32 crc kubenswrapper[4892]: I0122 09:26:32.082212 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovn-controller-ovs-snr9q"] Jan 22 09:26:32 crc kubenswrapper[4892]: I0122 09:26:32.222581 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 22 09:26:32 crc kubenswrapper[4892]: W0122 09:26:32.327596 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74af166d_c2f0_43b1_a516_e1d393e873b4.slice/crio-e9c02b50a8e7db42e9e4afb4920e6c9d9f3e7ae9a944dc538df17b55ab701e7d WatchSource:0}: Error finding container e9c02b50a8e7db42e9e4afb4920e6c9d9f3e7ae9a944dc538df17b55ab701e7d: Status 404 returned error can't find the container with id e9c02b50a8e7db42e9e4afb4920e6c9d9f3e7ae9a944dc538df17b55ab701e7d Jan 22 09:26:32 crc kubenswrapper[4892]: I0122 09:26:32.763545 4892 generic.go:334] "Generic (PLEG): container finished" podID="0eca1e45-30a1-46b6-9683-fd48782a4aea" containerID="2c4dc26e9c7fad02cb5c75fbeed3bb0bc7465259520bba6eeb6c4b1d66d41e3c" exitCode=0 Jan 22 09:26:32 crc kubenswrapper[4892]: I0122 09:26:32.763604 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55tz7" event={"ID":"0eca1e45-30a1-46b6-9683-fd48782a4aea","Type":"ContainerDied","Data":"2c4dc26e9c7fad02cb5c75fbeed3bb0bc7465259520bba6eeb6c4b1d66d41e3c"} Jan 22 09:26:32 crc kubenswrapper[4892]: I0122 09:26:32.763629 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55tz7" event={"ID":"0eca1e45-30a1-46b6-9683-fd48782a4aea","Type":"ContainerStarted","Data":"3066ab9cf4137c322203774183c708bf68c2337c7cb75d4234fd227cbbeb11a0"} Jan 22 09:26:32 crc kubenswrapper[4892]: I0122 09:26:32.764376 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-f8rll" event={"ID":"91fb6665-4bf4-4558-abf7-788627c34a1c","Type":"ContainerStarted","Data":"4023401e44a3acb87bd337fa722e16e7e6fc394f1f4dc82ed37e011c7f873eba"} Jan 22 09:26:32 crc kubenswrapper[4892]: I0122 09:26:32.765211 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"192369ce-10a4-47fb-9813-94de83265f37","Type":"ContainerStarted","Data":"4ca67edbbbc50aa3f2c735ee9c250b60a2105b24e3e59873b1335b0b3a6f7620"} Jan 22 09:26:32 crc kubenswrapper[4892]: I0122 09:26:32.766271 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"74af166d-c2f0-43b1-a516-e1d393e873b4","Type":"ContainerStarted","Data":"e9c02b50a8e7db42e9e4afb4920e6c9d9f3e7ae9a944dc538df17b55ab701e7d"} Jan 22 09:26:32 crc kubenswrapper[4892]: I0122 09:26:32.767510 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ed3e46d9-e9ca-453a-92a3-a07471597296","Type":"ContainerStarted","Data":"5a02a3a06615b82f3c97c3d01b679975c8cba19894b0cdfc48beef9691c7df57"} Jan 22 09:26:32 crc kubenswrapper[4892]: I0122 09:26:32.768696 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-snr9q" event={"ID":"2d7ca514-734a-4ab1-890f-b04a1549c073","Type":"ContainerStarted","Data":"ef51deb9a79c24e58a8ff5ed48978925ecc1aec62d06007dfc852dd8365e5298"} Jan 22 09:26:32 crc kubenswrapper[4892]: I0122 09:26:32.770548 4892 generic.go:334] "Generic (PLEG): container finished" podID="ea8e907a-3cb4-4368-8c4e-c312f537af37" containerID="c7e40fc9e301d236ab5b5f1b1f94855de540b5e435a4c60b814875c7116a968a" exitCode=0 Jan 22 09:26:32 crc kubenswrapper[4892]: I0122 09:26:32.770605 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-x9dnj" event={"ID":"ea8e907a-3cb4-4368-8c4e-c312f537af37","Type":"ContainerDied","Data":"c7e40fc9e301d236ab5b5f1b1f94855de540b5e435a4c60b814875c7116a968a"} Jan 22 09:26:32 crc kubenswrapper[4892]: I0122 09:26:32.770626 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x9dnj" event={"ID":"ea8e907a-3cb4-4368-8c4e-c312f537af37","Type":"ContainerStarted","Data":"8b32f9cc2c82c533ee79eb23cc330b5c0440acb3d62498043634c0c2214cb9b3"} Jan 22 09:26:32 crc kubenswrapper[4892]: I0122 09:26:32.773637 4892 generic.go:334] "Generic (PLEG): container finished" podID="1e032087-c02e-4c24-a634-4f202dc6921b" containerID="2694520b0f94a82ec6056b2dc04949ec5944d6743a4823335620803314210461" exitCode=0 Jan 22 09:26:32 crc kubenswrapper[4892]: I0122 09:26:32.773672 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dsksf" event={"ID":"1e032087-c02e-4c24-a634-4f202dc6921b","Type":"ContainerDied","Data":"2694520b0f94a82ec6056b2dc04949ec5944d6743a4823335620803314210461"} Jan 22 09:26:32 crc kubenswrapper[4892]: I0122 09:26:32.773693 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dsksf" event={"ID":"1e032087-c02e-4c24-a634-4f202dc6921b","Type":"ContainerStarted","Data":"2ff495a25b81b6df687959f63781838ae2c2634bc145972149699da718be3d58"} Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.165066 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-jh6lw"] Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.166753 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-jh6lw" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.178953 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.209676 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-6dzhd"] Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.216065 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-jh6lw"] Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.232046 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bbdc7ccd7-9gqb9"] Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.233371 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bbdc7ccd7-9gqb9" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.235545 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.237814 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bbdc7ccd7-9gqb9"] Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.263794 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92fqc\" (UniqueName: \"kubernetes.io/projected/9b67e5e8-090f-4674-9f29-f2586f095fd5-kube-api-access-92fqc\") pod \"dnsmasq-dns-7bbdc7ccd7-9gqb9\" (UID: \"9b67e5e8-090f-4674-9f29-f2586f095fd5\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-9gqb9" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.263856 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-622lc\" (UniqueName: \"kubernetes.io/projected/c16202bf-0e93-4bb8-96fb-cf6537ea21e6-kube-api-access-622lc\") pod \"ovn-controller-metrics-jh6lw\" (UID: \"c16202bf-0e93-4bb8-96fb-cf6537ea21e6\") " pod="openstack/ovn-controller-metrics-jh6lw" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.263881 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b67e5e8-090f-4674-9f29-f2586f095fd5-config\") pod \"dnsmasq-dns-7bbdc7ccd7-9gqb9\" (UID: \"9b67e5e8-090f-4674-9f29-f2586f095fd5\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-9gqb9" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.263902 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c16202bf-0e93-4bb8-96fb-cf6537ea21e6-ovn-rundir\") pod \"ovn-controller-metrics-jh6lw\" (UID: \"c16202bf-0e93-4bb8-96fb-cf6537ea21e6\") " pod="openstack/ovn-controller-metrics-jh6lw" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.263941 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16202bf-0e93-4bb8-96fb-cf6537ea21e6-combined-ca-bundle\") pod \"ovn-controller-metrics-jh6lw\" (UID: \"c16202bf-0e93-4bb8-96fb-cf6537ea21e6\") " pod="openstack/ovn-controller-metrics-jh6lw" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.264013 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b67e5e8-090f-4674-9f29-f2586f095fd5-dns-svc\") pod \"dnsmasq-dns-7bbdc7ccd7-9gqb9\" (UID: \"9b67e5e8-090f-4674-9f29-f2586f095fd5\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-9gqb9" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.264072 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c16202bf-0e93-4bb8-96fb-cf6537ea21e6-config\") pod \"ovn-controller-metrics-jh6lw\" (UID: \"c16202bf-0e93-4bb8-96fb-cf6537ea21e6\") " pod="openstack/ovn-controller-metrics-jh6lw" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.264092 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b67e5e8-090f-4674-9f29-f2586f095fd5-ovsdbserver-nb\") pod \"dnsmasq-dns-7bbdc7ccd7-9gqb9\" (UID: 
\"9b67e5e8-090f-4674-9f29-f2586f095fd5\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-9gqb9" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.264122 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c16202bf-0e93-4bb8-96fb-cf6537ea21e6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jh6lw\" (UID: \"c16202bf-0e93-4bb8-96fb-cf6537ea21e6\") " pod="openstack/ovn-controller-metrics-jh6lw" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.264142 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c16202bf-0e93-4bb8-96fb-cf6537ea21e6-ovs-rundir\") pod \"ovn-controller-metrics-jh6lw\" (UID: \"c16202bf-0e93-4bb8-96fb-cf6537ea21e6\") " pod="openstack/ovn-controller-metrics-jh6lw" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.313079 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-kbmtj"] Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.365061 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c16202bf-0e93-4bb8-96fb-cf6537ea21e6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jh6lw\" (UID: \"c16202bf-0e93-4bb8-96fb-cf6537ea21e6\") " pod="openstack/ovn-controller-metrics-jh6lw" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.365104 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c16202bf-0e93-4bb8-96fb-cf6537ea21e6-ovs-rundir\") pod \"ovn-controller-metrics-jh6lw\" (UID: \"c16202bf-0e93-4bb8-96fb-cf6537ea21e6\") " pod="openstack/ovn-controller-metrics-jh6lw" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.365144 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92fqc\" (UniqueName: \"kubernetes.io/projected/9b67e5e8-090f-4674-9f29-f2586f095fd5-kube-api-access-92fqc\") pod \"dnsmasq-dns-7bbdc7ccd7-9gqb9\" (UID: \"9b67e5e8-090f-4674-9f29-f2586f095fd5\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-9gqb9" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.365167 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-622lc\" (UniqueName: \"kubernetes.io/projected/c16202bf-0e93-4bb8-96fb-cf6537ea21e6-kube-api-access-622lc\") pod \"ovn-controller-metrics-jh6lw\" (UID: \"c16202bf-0e93-4bb8-96fb-cf6537ea21e6\") " pod="openstack/ovn-controller-metrics-jh6lw" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.365182 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b67e5e8-090f-4674-9f29-f2586f095fd5-config\") pod \"dnsmasq-dns-7bbdc7ccd7-9gqb9\" (UID: \"9b67e5e8-090f-4674-9f29-f2586f095fd5\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-9gqb9" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.365197 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c16202bf-0e93-4bb8-96fb-cf6537ea21e6-ovn-rundir\") pod \"ovn-controller-metrics-jh6lw\" (UID: \"c16202bf-0e93-4bb8-96fb-cf6537ea21e6\") " pod="openstack/ovn-controller-metrics-jh6lw" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.365225 4892 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16202bf-0e93-4bb8-96fb-cf6537ea21e6-combined-ca-bundle\") pod \"ovn-controller-metrics-jh6lw\" (UID: \"c16202bf-0e93-4bb8-96fb-cf6537ea21e6\") " pod="openstack/ovn-controller-metrics-jh6lw" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.365252 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b67e5e8-090f-4674-9f29-f2586f095fd5-dns-svc\") pod \"dnsmasq-dns-7bbdc7ccd7-9gqb9\" (UID: \"9b67e5e8-090f-4674-9f29-f2586f095fd5\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-9gqb9" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.365323 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c16202bf-0e93-4bb8-96fb-cf6537ea21e6-config\") pod \"ovn-controller-metrics-jh6lw\" (UID: \"c16202bf-0e93-4bb8-96fb-cf6537ea21e6\") " pod="openstack/ovn-controller-metrics-jh6lw" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.365342 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b67e5e8-090f-4674-9f29-f2586f095fd5-ovsdbserver-nb\") pod \"dnsmasq-dns-7bbdc7ccd7-9gqb9\" (UID: \"9b67e5e8-090f-4674-9f29-f2586f095fd5\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-9gqb9" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.366144 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b67e5e8-090f-4674-9f29-f2586f095fd5-ovsdbserver-nb\") pod \"dnsmasq-dns-7bbdc7ccd7-9gqb9\" (UID: \"9b67e5e8-090f-4674-9f29-f2586f095fd5\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-9gqb9" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.372126 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c16202bf-0e93-4bb8-96fb-cf6537ea21e6-ovs-rundir\") pod \"ovn-controller-metrics-jh6lw\" (UID: \"c16202bf-0e93-4bb8-96fb-cf6537ea21e6\") " pod="openstack/ovn-controller-metrics-jh6lw" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.372319 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c16202bf-0e93-4bb8-96fb-cf6537ea21e6-ovn-rundir\") pod \"ovn-controller-metrics-jh6lw\" (UID: \"c16202bf-0e93-4bb8-96fb-cf6537ea21e6\") " pod="openstack/ovn-controller-metrics-jh6lw" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.373500 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b67e5e8-090f-4674-9f29-f2586f095fd5-config\") pod \"dnsmasq-dns-7bbdc7ccd7-9gqb9\" (UID: \"9b67e5e8-090f-4674-9f29-f2586f095fd5\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-9gqb9" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.374118 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b67e5e8-090f-4674-9f29-f2586f095fd5-dns-svc\") pod \"dnsmasq-dns-7bbdc7ccd7-9gqb9\" (UID: \"9b67e5e8-090f-4674-9f29-f2586f095fd5\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-9gqb9" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.375875 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c16202bf-0e93-4bb8-96fb-cf6537ea21e6-config\") pod \"ovn-controller-metrics-jh6lw\" (UID: 
\"c16202bf-0e93-4bb8-96fb-cf6537ea21e6\") " pod="openstack/ovn-controller-metrics-jh6lw" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.384465 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-z4lg2"] Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.385761 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-z4lg2" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.388691 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.392632 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16202bf-0e93-4bb8-96fb-cf6537ea21e6-combined-ca-bundle\") pod \"ovn-controller-metrics-jh6lw\" (UID: \"c16202bf-0e93-4bb8-96fb-cf6537ea21e6\") " pod="openstack/ovn-controller-metrics-jh6lw" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.404364 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c16202bf-0e93-4bb8-96fb-cf6537ea21e6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jh6lw\" (UID: \"c16202bf-0e93-4bb8-96fb-cf6537ea21e6\") " pod="openstack/ovn-controller-metrics-jh6lw" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.404467 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-622lc\" (UniqueName: \"kubernetes.io/projected/c16202bf-0e93-4bb8-96fb-cf6537ea21e6-kube-api-access-622lc\") pod \"ovn-controller-metrics-jh6lw\" (UID: \"c16202bf-0e93-4bb8-96fb-cf6537ea21e6\") " pod="openstack/ovn-controller-metrics-jh6lw" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.406156 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92fqc\" (UniqueName: \"kubernetes.io/projected/9b67e5e8-090f-4674-9f29-f2586f095fd5-kube-api-access-92fqc\") pod \"dnsmasq-dns-7bbdc7ccd7-9gqb9\" (UID: \"9b67e5e8-090f-4674-9f29-f2586f095fd5\") " pod="openstack/dnsmasq-dns-7bbdc7ccd7-9gqb9" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.470359 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26741c27-fb70-4941-b10a-027c69c63e47" path="/var/lib/kubelet/pods/26741c27-fb70-4941-b10a-027c69c63e47/volumes" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.471375 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f63d16f2-8679-4ddc-abd5-934392e7648c" path="/var/lib/kubelet/pods/f63d16f2-8679-4ddc-abd5-934392e7648c/volumes" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.471748 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-z4lg2"] Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.510099 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-jh6lw" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.565351 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bbdc7ccd7-9gqb9" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.571764 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfjg5\" (UniqueName: \"kubernetes.io/projected/d4ebb0c2-1567-423e-ac41-69ffffbe396e-kube-api-access-bfjg5\") pod \"dnsmasq-dns-757dc6fff9-z4lg2\" (UID: \"d4ebb0c2-1567-423e-ac41-69ffffbe396e\") " pod="openstack/dnsmasq-dns-757dc6fff9-z4lg2" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.571842 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4ebb0c2-1567-423e-ac41-69ffffbe396e-ovsdbserver-nb\") pod \"dnsmasq-dns-757dc6fff9-z4lg2\" (UID: \"d4ebb0c2-1567-423e-ac41-69ffffbe396e\") " pod="openstack/dnsmasq-dns-757dc6fff9-z4lg2" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.571860 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4ebb0c2-1567-423e-ac41-69ffffbe396e-ovsdbserver-sb\") pod \"dnsmasq-dns-757dc6fff9-z4lg2\" (UID: \"d4ebb0c2-1567-423e-ac41-69ffffbe396e\") " pod="openstack/dnsmasq-dns-757dc6fff9-z4lg2" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.571975 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4ebb0c2-1567-423e-ac41-69ffffbe396e-dns-svc\") pod \"dnsmasq-dns-757dc6fff9-z4lg2\" (UID: \"d4ebb0c2-1567-423e-ac41-69ffffbe396e\") " pod="openstack/dnsmasq-dns-757dc6fff9-z4lg2" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.572023 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ebb0c2-1567-423e-ac41-69ffffbe396e-config\") pod \"dnsmasq-dns-757dc6fff9-z4lg2\" (UID: \"d4ebb0c2-1567-423e-ac41-69ffffbe396e\") " pod="openstack/dnsmasq-dns-757dc6fff9-z4lg2" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.673468 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ebb0c2-1567-423e-ac41-69ffffbe396e-config\") pod \"dnsmasq-dns-757dc6fff9-z4lg2\" (UID: \"d4ebb0c2-1567-423e-ac41-69ffffbe396e\") " pod="openstack/dnsmasq-dns-757dc6fff9-z4lg2" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.673531 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfjg5\" (UniqueName: \"kubernetes.io/projected/d4ebb0c2-1567-423e-ac41-69ffffbe396e-kube-api-access-bfjg5\") pod \"dnsmasq-dns-757dc6fff9-z4lg2\" (UID: \"d4ebb0c2-1567-423e-ac41-69ffffbe396e\") " pod="openstack/dnsmasq-dns-757dc6fff9-z4lg2" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.673564 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4ebb0c2-1567-423e-ac41-69ffffbe396e-ovsdbserver-nb\") pod \"dnsmasq-dns-757dc6fff9-z4lg2\" (UID: \"d4ebb0c2-1567-423e-ac41-69ffffbe396e\") " pod="openstack/dnsmasq-dns-757dc6fff9-z4lg2" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.673581 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4ebb0c2-1567-423e-ac41-69ffffbe396e-ovsdbserver-sb\") pod \"dnsmasq-dns-757dc6fff9-z4lg2\" 
(UID: \"d4ebb0c2-1567-423e-ac41-69ffffbe396e\") " pod="openstack/dnsmasq-dns-757dc6fff9-z4lg2" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.673642 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4ebb0c2-1567-423e-ac41-69ffffbe396e-dns-svc\") pod \"dnsmasq-dns-757dc6fff9-z4lg2\" (UID: \"d4ebb0c2-1567-423e-ac41-69ffffbe396e\") " pod="openstack/dnsmasq-dns-757dc6fff9-z4lg2" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.674436 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4ebb0c2-1567-423e-ac41-69ffffbe396e-dns-svc\") pod \"dnsmasq-dns-757dc6fff9-z4lg2\" (UID: \"d4ebb0c2-1567-423e-ac41-69ffffbe396e\") " pod="openstack/dnsmasq-dns-757dc6fff9-z4lg2" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.674458 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ebb0c2-1567-423e-ac41-69ffffbe396e-config\") pod \"dnsmasq-dns-757dc6fff9-z4lg2\" (UID: \"d4ebb0c2-1567-423e-ac41-69ffffbe396e\") " pod="openstack/dnsmasq-dns-757dc6fff9-z4lg2" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.674784 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4ebb0c2-1567-423e-ac41-69ffffbe396e-ovsdbserver-sb\") pod \"dnsmasq-dns-757dc6fff9-z4lg2\" (UID: \"d4ebb0c2-1567-423e-ac41-69ffffbe396e\") " pod="openstack/dnsmasq-dns-757dc6fff9-z4lg2" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.674982 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4ebb0c2-1567-423e-ac41-69ffffbe396e-ovsdbserver-nb\") pod \"dnsmasq-dns-757dc6fff9-z4lg2\" (UID: \"d4ebb0c2-1567-423e-ac41-69ffffbe396e\") " pod="openstack/dnsmasq-dns-757dc6fff9-z4lg2" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.693202 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfjg5\" (UniqueName: \"kubernetes.io/projected/d4ebb0c2-1567-423e-ac41-69ffffbe396e-kube-api-access-bfjg5\") pod \"dnsmasq-dns-757dc6fff9-z4lg2\" (UID: \"d4ebb0c2-1567-423e-ac41-69ffffbe396e\") " pod="openstack/dnsmasq-dns-757dc6fff9-z4lg2" Jan 22 09:26:33 crc kubenswrapper[4892]: I0122 09:26:33.787396 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-z4lg2" Jan 22 09:26:35 crc kubenswrapper[4892]: I0122 09:26:35.332780 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-6dzhd" Jan 22 09:26:35 crc kubenswrapper[4892]: I0122 09:26:35.409640 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a9eef26-0c1b-48de-9351-d70e43429f38-dns-svc\") pod \"5a9eef26-0c1b-48de-9351-d70e43429f38\" (UID: \"5a9eef26-0c1b-48de-9351-d70e43429f38\") " Jan 22 09:26:35 crc kubenswrapper[4892]: I0122 09:26:35.409741 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a9eef26-0c1b-48de-9351-d70e43429f38-config\") pod \"5a9eef26-0c1b-48de-9351-d70e43429f38\" (UID: \"5a9eef26-0c1b-48de-9351-d70e43429f38\") " Jan 22 09:26:35 crc kubenswrapper[4892]: I0122 09:26:35.409768 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd7vz\" (UniqueName: \"kubernetes.io/projected/5a9eef26-0c1b-48de-9351-d70e43429f38-kube-api-access-kd7vz\") pod \"5a9eef26-0c1b-48de-9351-d70e43429f38\" (UID: \"5a9eef26-0c1b-48de-9351-d70e43429f38\") " Jan 22 09:26:35 crc kubenswrapper[4892]: I0122 09:26:35.410238 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a9eef26-0c1b-48de-9351-d70e43429f38-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5a9eef26-0c1b-48de-9351-d70e43429f38" (UID: "5a9eef26-0c1b-48de-9351-d70e43429f38"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:26:35 crc kubenswrapper[4892]: I0122 09:26:35.410387 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a9eef26-0c1b-48de-9351-d70e43429f38-config" (OuterVolumeSpecName: "config") pod "5a9eef26-0c1b-48de-9351-d70e43429f38" (UID: "5a9eef26-0c1b-48de-9351-d70e43429f38"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:26:35 crc kubenswrapper[4892]: I0122 09:26:35.416130 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a9eef26-0c1b-48de-9351-d70e43429f38-kube-api-access-kd7vz" (OuterVolumeSpecName: "kube-api-access-kd7vz") pod "5a9eef26-0c1b-48de-9351-d70e43429f38" (UID: "5a9eef26-0c1b-48de-9351-d70e43429f38"). InnerVolumeSpecName "kube-api-access-kd7vz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:26:35 crc kubenswrapper[4892]: I0122 09:26:35.511084 4892 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a9eef26-0c1b-48de-9351-d70e43429f38-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 09:26:35 crc kubenswrapper[4892]: I0122 09:26:35.511111 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a9eef26-0c1b-48de-9351-d70e43429f38-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:26:35 crc kubenswrapper[4892]: I0122 09:26:35.511121 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kd7vz\" (UniqueName: \"kubernetes.io/projected/5a9eef26-0c1b-48de-9351-d70e43429f38-kube-api-access-kd7vz\") on node \"crc\" DevicePath \"\"" Jan 22 09:26:35 crc kubenswrapper[4892]: I0122 09:26:35.582908 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-kbmtj" Jan 22 09:26:35 crc kubenswrapper[4892]: I0122 09:26:35.622583 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b18d04c-fcc1-4d58-b000-344b8e0b71d0-config\") pod \"6b18d04c-fcc1-4d58-b000-344b8e0b71d0\" (UID: \"6b18d04c-fcc1-4d58-b000-344b8e0b71d0\") " Jan 22 09:26:35 crc kubenswrapper[4892]: I0122 09:26:35.622676 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b18d04c-fcc1-4d58-b000-344b8e0b71d0-dns-svc\") pod \"6b18d04c-fcc1-4d58-b000-344b8e0b71d0\" (UID: \"6b18d04c-fcc1-4d58-b000-344b8e0b71d0\") " Jan 22 09:26:35 crc kubenswrapper[4892]: I0122 09:26:35.622717 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhrlk\" (UniqueName: \"kubernetes.io/projected/6b18d04c-fcc1-4d58-b000-344b8e0b71d0-kube-api-access-nhrlk\") pod \"6b18d04c-fcc1-4d58-b000-344b8e0b71d0\" (UID: \"6b18d04c-fcc1-4d58-b000-344b8e0b71d0\") " Jan 22 09:26:35 crc kubenswrapper[4892]: I0122 09:26:35.623151 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b18d04c-fcc1-4d58-b000-344b8e0b71d0-config" (OuterVolumeSpecName: "config") pod "6b18d04c-fcc1-4d58-b000-344b8e0b71d0" (UID: "6b18d04c-fcc1-4d58-b000-344b8e0b71d0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:26:35 crc kubenswrapper[4892]: I0122 09:26:35.623826 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b18d04c-fcc1-4d58-b000-344b8e0b71d0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6b18d04c-fcc1-4d58-b000-344b8e0b71d0" (UID: "6b18d04c-fcc1-4d58-b000-344b8e0b71d0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:26:35 crc kubenswrapper[4892]: I0122 09:26:35.626644 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b18d04c-fcc1-4d58-b000-344b8e0b71d0-kube-api-access-nhrlk" (OuterVolumeSpecName: "kube-api-access-nhrlk") pod "6b18d04c-fcc1-4d58-b000-344b8e0b71d0" (UID: "6b18d04c-fcc1-4d58-b000-344b8e0b71d0"). InnerVolumeSpecName "kube-api-access-nhrlk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:26:35 crc kubenswrapper[4892]: I0122 09:26:35.724352 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b18d04c-fcc1-4d58-b000-344b8e0b71d0-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:26:35 crc kubenswrapper[4892]: I0122 09:26:35.724566 4892 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b18d04c-fcc1-4d58-b000-344b8e0b71d0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 09:26:35 crc kubenswrapper[4892]: I0122 09:26:35.724578 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhrlk\" (UniqueName: \"kubernetes.io/projected/6b18d04c-fcc1-4d58-b000-344b8e0b71d0-kube-api-access-nhrlk\") on node \"crc\" DevicePath \"\"" Jan 22 09:26:35 crc kubenswrapper[4892]: I0122 09:26:35.804118 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-kbmtj" event={"ID":"6b18d04c-fcc1-4d58-b000-344b8e0b71d0","Type":"ContainerDied","Data":"324a81b34ef8a607347a1834ce5996f5c0491ed8021a5dd6ba7c4aabf5be9b08"} Jan 22 09:26:35 crc kubenswrapper[4892]: I0122 09:26:35.804363 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-kbmtj" Jan 22 09:26:35 crc kubenswrapper[4892]: I0122 09:26:35.806314 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-6dzhd" event={"ID":"5a9eef26-0c1b-48de-9351-d70e43429f38","Type":"ContainerDied","Data":"9ecb92bbb39300bca26966c8122393de5e1025e6d796667ba15bb3e09d610e8f"} Jan 22 09:26:35 crc kubenswrapper[4892]: I0122 09:26:35.806360 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-6dzhd" Jan 22 09:26:35 crc kubenswrapper[4892]: I0122 09:26:35.843769 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-6dzhd"] Jan 22 09:26:35 crc kubenswrapper[4892]: I0122 09:26:35.853303 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-6dzhd"] Jan 22 09:26:35 crc kubenswrapper[4892]: I0122 09:26:35.887274 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-kbmtj"] Jan 22 09:26:35 crc kubenswrapper[4892]: I0122 09:26:35.893268 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-kbmtj"] Jan 22 09:26:36 crc kubenswrapper[4892]: I0122 09:26:36.123000 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-z4lg2"] Jan 22 09:26:36 crc kubenswrapper[4892]: W0122 09:26:36.172422 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4ebb0c2_1567_423e_ac41_69ffffbe396e.slice/crio-fac7fd19e27bbaad05487f60f4a6df07f7883d3b17e02bfcddc0bc1185d40a36 WatchSource:0}: Error finding container fac7fd19e27bbaad05487f60f4a6df07f7883d3b17e02bfcddc0bc1185d40a36: Status 404 returned error can't find the container with id fac7fd19e27bbaad05487f60f4a6df07f7883d3b17e02bfcddc0bc1185d40a36 Jan 22 09:26:36 crc kubenswrapper[4892]: I0122 09:26:36.582698 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-jh6lw"] Jan 22 09:26:36 crc kubenswrapper[4892]: I0122 09:26:36.680132 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bbdc7ccd7-9gqb9"] Jan 22 09:26:36 crc 
kubenswrapper[4892]: W0122 09:26:36.715133 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc16202bf_0e93_4bb8_96fb_cf6537ea21e6.slice/crio-38c5224547672fb7b4217516ca4ebe48f82a0e0c51df5c8058132d3018819f9f WatchSource:0}: Error finding container 38c5224547672fb7b4217516ca4ebe48f82a0e0c51df5c8058132d3018819f9f: Status 404 returned error can't find the container with id 38c5224547672fb7b4217516ca4ebe48f82a0e0c51df5c8058132d3018819f9f Jan 22 09:26:36 crc kubenswrapper[4892]: I0122 09:26:36.815024 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jh6lw" event={"ID":"c16202bf-0e93-4bb8-96fb-cf6537ea21e6","Type":"ContainerStarted","Data":"38c5224547672fb7b4217516ca4ebe48f82a0e0c51df5c8058132d3018819f9f"} Jan 22 09:26:36 crc kubenswrapper[4892]: I0122 09:26:36.817044 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-z4lg2" event={"ID":"d4ebb0c2-1567-423e-ac41-69ffffbe396e","Type":"ContainerStarted","Data":"fac7fd19e27bbaad05487f60f4a6df07f7883d3b17e02bfcddc0bc1185d40a36"} Jan 22 09:26:36 crc kubenswrapper[4892]: I0122 09:26:36.819028 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bbdc7ccd7-9gqb9" event={"ID":"9b67e5e8-090f-4674-9f29-f2586f095fd5","Type":"ContainerStarted","Data":"8ca536a5093c19f12537b096ed2bcf237c11af3e677b79d4b176796fa6d55c09"} Jan 22 09:26:37 crc kubenswrapper[4892]: I0122 09:26:37.428006 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a9eef26-0c1b-48de-9351-d70e43429f38" path="/var/lib/kubelet/pods/5a9eef26-0c1b-48de-9351-d70e43429f38/volumes" Jan 22 09:26:37 crc kubenswrapper[4892]: I0122 09:26:37.428730 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b18d04c-fcc1-4d58-b000-344b8e0b71d0" path="/var/lib/kubelet/pods/6b18d04c-fcc1-4d58-b000-344b8e0b71d0/volumes" Jan 22 09:26:37 crc kubenswrapper[4892]: I0122 09:26:37.832633 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"74af166d-c2f0-43b1-a516-e1d393e873b4","Type":"ContainerStarted","Data":"bf0575a4168110589c3f4f876f826cf5229090b778d9c7ff62a967211fcbce28"} Jan 22 09:26:37 crc kubenswrapper[4892]: I0122 09:26:37.837170 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ed3e46d9-e9ca-453a-92a3-a07471597296","Type":"ContainerStarted","Data":"ac66ed13b1bd46fa324ebca33bfb598de2cc9f10c779a955be779ce4c7473d60"} Jan 22 09:26:37 crc kubenswrapper[4892]: I0122 09:26:37.840140 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-snr9q" event={"ID":"2d7ca514-734a-4ab1-890f-b04a1549c073","Type":"ContainerStarted","Data":"8895257e6c9bbde3fd6c8db3c08e7ec9ab29343f12f56733869b9abc3cd38b4f"} Jan 22 09:26:37 crc kubenswrapper[4892]: I0122 09:26:37.843660 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x9dnj" event={"ID":"ea8e907a-3cb4-4368-8c4e-c312f537af37","Type":"ContainerStarted","Data":"b86f6bc449c6857c65f934a4d61fe8390d8e8d10bacc96b6dea3a0cd4515b0e0"} Jan 22 09:26:37 crc kubenswrapper[4892]: I0122 09:26:37.846239 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55tz7" event={"ID":"0eca1e45-30a1-46b6-9683-fd48782a4aea","Type":"ContainerStarted","Data":"ef5a7161854f55e5f7dc2908baf3317327e7198ed58f21f4c29d7fac16f8e9b8"} Jan 22 09:26:37 crc 
kubenswrapper[4892]: I0122 09:26:37.848142 4892 generic.go:334] "Generic (PLEG): container finished" podID="1e032087-c02e-4c24-a634-4f202dc6921b" containerID="0c9dc88d48c4e57d605232a03b5b6b6b43be710e3bd9ee7e4f054d6d3224280d" exitCode=0 Jan 22 09:26:37 crc kubenswrapper[4892]: I0122 09:26:37.848168 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dsksf" event={"ID":"1e032087-c02e-4c24-a634-4f202dc6921b","Type":"ContainerDied","Data":"0c9dc88d48c4e57d605232a03b5b6b6b43be710e3bd9ee7e4f054d6d3224280d"} Jan 22 09:26:38 crc kubenswrapper[4892]: I0122 09:26:38.856323 4892 generic.go:334] "Generic (PLEG): container finished" podID="2d7ca514-734a-4ab1-890f-b04a1549c073" containerID="8895257e6c9bbde3fd6c8db3c08e7ec9ab29343f12f56733869b9abc3cd38b4f" exitCode=0 Jan 22 09:26:38 crc kubenswrapper[4892]: I0122 09:26:38.856416 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-snr9q" event={"ID":"2d7ca514-734a-4ab1-890f-b04a1549c073","Type":"ContainerDied","Data":"8895257e6c9bbde3fd6c8db3c08e7ec9ab29343f12f56733869b9abc3cd38b4f"} Jan 22 09:26:38 crc kubenswrapper[4892]: I0122 09:26:38.858451 4892 generic.go:334] "Generic (PLEG): container finished" podID="9b67e5e8-090f-4674-9f29-f2586f095fd5" containerID="bb17adfc46d7c899719c97a7a025b60d06208e347ab0746909c3fd6d5f9aae3b" exitCode=0 Jan 22 09:26:38 crc kubenswrapper[4892]: I0122 09:26:38.858497 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bbdc7ccd7-9gqb9" event={"ID":"9b67e5e8-090f-4674-9f29-f2586f095fd5","Type":"ContainerDied","Data":"bb17adfc46d7c899719c97a7a025b60d06208e347ab0746909c3fd6d5f9aae3b"} Jan 22 09:26:38 crc kubenswrapper[4892]: I0122 09:26:38.861099 4892 generic.go:334] "Generic (PLEG): container finished" podID="ea8e907a-3cb4-4368-8c4e-c312f537af37" containerID="b86f6bc449c6857c65f934a4d61fe8390d8e8d10bacc96b6dea3a0cd4515b0e0" exitCode=0 Jan 22 09:26:38 crc kubenswrapper[4892]: I0122 09:26:38.861341 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x9dnj" event={"ID":"ea8e907a-3cb4-4368-8c4e-c312f537af37","Type":"ContainerDied","Data":"b86f6bc449c6857c65f934a4d61fe8390d8e8d10bacc96b6dea3a0cd4515b0e0"} Jan 22 09:26:38 crc kubenswrapper[4892]: I0122 09:26:38.864798 4892 generic.go:334] "Generic (PLEG): container finished" podID="0eca1e45-30a1-46b6-9683-fd48782a4aea" containerID="ef5a7161854f55e5f7dc2908baf3317327e7198ed58f21f4c29d7fac16f8e9b8" exitCode=0 Jan 22 09:26:38 crc kubenswrapper[4892]: I0122 09:26:38.864870 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55tz7" event={"ID":"0eca1e45-30a1-46b6-9683-fd48782a4aea","Type":"ContainerDied","Data":"ef5a7161854f55e5f7dc2908baf3317327e7198ed58f21f4c29d7fac16f8e9b8"} Jan 22 09:26:38 crc kubenswrapper[4892]: I0122 09:26:38.888398 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dsksf" event={"ID":"1e032087-c02e-4c24-a634-4f202dc6921b","Type":"ContainerStarted","Data":"7ab0c122ca045b4851d2361c00985396f6a69250f33605485d75150d6de26677"} Jan 22 09:26:38 crc kubenswrapper[4892]: I0122 09:26:38.904822 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-f8rll" event={"ID":"91fb6665-4bf4-4558-abf7-788627c34a1c","Type":"ContainerStarted","Data":"5cbff517256a643da03519a87cf30d77bb58373eb87f6bfc269901b599850eff"} Jan 22 09:26:38 crc kubenswrapper[4892]: I0122 09:26:38.905719 4892 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-f8rll" Jan 22 09:26:38 crc kubenswrapper[4892]: I0122 09:26:38.917069 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"192369ce-10a4-47fb-9813-94de83265f37","Type":"ContainerStarted","Data":"91dd86529dd751c0ed5e3fc507e982dea16e91ebd306737380337361da42eb02"} Jan 22 09:26:38 crc kubenswrapper[4892]: I0122 09:26:38.917361 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 22 09:26:38 crc kubenswrapper[4892]: I0122 09:26:38.933412 4892 generic.go:334] "Generic (PLEG): container finished" podID="d4ebb0c2-1567-423e-ac41-69ffffbe396e" containerID="2ffa7b251a3198211c0ae18c7b25bf7dfeab8aea1928c22611be30cce719f2bf" exitCode=0 Jan 22 09:26:38 crc kubenswrapper[4892]: I0122 09:26:38.934465 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-z4lg2" event={"ID":"d4ebb0c2-1567-423e-ac41-69ffffbe396e","Type":"ContainerDied","Data":"2ffa7b251a3198211c0ae18c7b25bf7dfeab8aea1928c22611be30cce719f2bf"} Jan 22 09:26:39 crc kubenswrapper[4892]: I0122 09:26:39.044332 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dsksf" podStartSLOduration=29.590071731 podStartE2EDuration="35.044310816s" podCreationTimestamp="2026-01-22 09:26:04 +0000 UTC" firstStartedPulling="2026-01-22 09:26:32.774832939 +0000 UTC m=+962.618912002" lastFinishedPulling="2026-01-22 09:26:38.229072024 +0000 UTC m=+968.073151087" observedRunningTime="2026-01-22 09:26:39.015020489 +0000 UTC m=+968.859099552" watchObservedRunningTime="2026-01-22 09:26:39.044310816 +0000 UTC m=+968.888389869" Jan 22 09:26:39 crc kubenswrapper[4892]: I0122 09:26:39.057416 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=28.464549168 podStartE2EDuration="34.057397047s" podCreationTimestamp="2026-01-22 09:26:05 +0000 UTC" firstStartedPulling="2026-01-22 09:26:31.984915764 +0000 UTC m=+961.828994827" lastFinishedPulling="2026-01-22 09:26:37.577763603 +0000 UTC m=+967.421842706" observedRunningTime="2026-01-22 09:26:39.053345237 +0000 UTC m=+968.897424300" watchObservedRunningTime="2026-01-22 09:26:39.057397047 +0000 UTC m=+968.901476110" Jan 22 09:26:39 crc kubenswrapper[4892]: I0122 09:26:39.075828 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-f8rll" podStartSLOduration=25.894970248 podStartE2EDuration="30.075795737s" podCreationTimestamp="2026-01-22 09:26:09 +0000 UTC" firstStartedPulling="2026-01-22 09:26:31.774486923 +0000 UTC m=+961.618565986" lastFinishedPulling="2026-01-22 09:26:35.955312392 +0000 UTC m=+965.799391475" observedRunningTime="2026-01-22 09:26:39.072315242 +0000 UTC m=+968.916394305" watchObservedRunningTime="2026-01-22 09:26:39.075795737 +0000 UTC m=+968.919874800" Jan 22 09:26:39 crc kubenswrapper[4892]: I0122 09:26:39.947319 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x9dnj" event={"ID":"ea8e907a-3cb4-4368-8c4e-c312f537af37","Type":"ContainerStarted","Data":"7ed12bbf609d970206b09593c0413534f26a446a0b98394b2dc814b8a8a30bae"} Jan 22 09:26:39 crc kubenswrapper[4892]: I0122 09:26:39.964379 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55tz7" 
event={"ID":"0eca1e45-30a1-46b6-9683-fd48782a4aea","Type":"ContainerStarted","Data":"e12865381cb945841151c166ce8409388b577e2309887b7a7d14b0a57d0c57d0"} Jan 22 09:26:39 crc kubenswrapper[4892]: I0122 09:26:39.968230 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x9dnj" podStartSLOduration=15.472442834 podStartE2EDuration="21.96821827s" podCreationTimestamp="2026-01-22 09:26:18 +0000 UTC" firstStartedPulling="2026-01-22 09:26:32.772348488 +0000 UTC m=+962.616427551" lastFinishedPulling="2026-01-22 09:26:39.268123924 +0000 UTC m=+969.112202987" observedRunningTime="2026-01-22 09:26:39.965432101 +0000 UTC m=+969.809511164" watchObservedRunningTime="2026-01-22 09:26:39.96821827 +0000 UTC m=+969.812297333" Jan 22 09:26:39 crc kubenswrapper[4892]: I0122 09:26:39.981492 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-z4lg2" event={"ID":"d4ebb0c2-1567-423e-ac41-69ffffbe396e","Type":"ContainerStarted","Data":"3b557135410561259b57f87f4281ffc1b70b4be82060e3bd7ac7f4fd2e0bb37b"} Jan 22 09:26:39 crc kubenswrapper[4892]: I0122 09:26:39.982158 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757dc6fff9-z4lg2" Jan 22 09:26:39 crc kubenswrapper[4892]: I0122 09:26:39.984031 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bbdc7ccd7-9gqb9" event={"ID":"9b67e5e8-090f-4674-9f29-f2586f095fd5","Type":"ContainerStarted","Data":"0ef538c5c9a3cf6e0b8758b8c46aca57a830e3954d92beae562bc8537552d9ea"} Jan 22 09:26:39 crc kubenswrapper[4892]: I0122 09:26:39.984425 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7bbdc7ccd7-9gqb9" Jan 22 09:26:39 crc kubenswrapper[4892]: I0122 09:26:39.988334 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-55tz7" podStartSLOduration=21.444960891 podStartE2EDuration="27.988322422s" podCreationTimestamp="2026-01-22 09:26:12 +0000 UTC" firstStartedPulling="2026-01-22 09:26:32.768824702 +0000 UTC m=+962.612903765" lastFinishedPulling="2026-01-22 09:26:39.312186233 +0000 UTC m=+969.156265296" observedRunningTime="2026-01-22 09:26:39.987544873 +0000 UTC m=+969.831623936" watchObservedRunningTime="2026-01-22 09:26:39.988322422 +0000 UTC m=+969.832401485" Jan 22 09:26:39 crc kubenswrapper[4892]: I0122 09:26:39.997695 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-snr9q" event={"ID":"2d7ca514-734a-4ab1-890f-b04a1549c073","Type":"ContainerStarted","Data":"872adf1496ba977fb7495e91581380cbee5d9d6d32c5712dfda18a57c0f82e58"} Jan 22 09:26:39 crc kubenswrapper[4892]: I0122 09:26:39.997734 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-snr9q" event={"ID":"2d7ca514-734a-4ab1-890f-b04a1549c073","Type":"ContainerStarted","Data":"deb6b946fb02a13ab7a9681acb1a519f937fb404d811e9ca4020479b78af97de"} Jan 22 09:26:39 crc kubenswrapper[4892]: I0122 09:26:39.998406 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-snr9q" Jan 22 09:26:39 crc kubenswrapper[4892]: I0122 09:26:39.998430 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-snr9q" Jan 22 09:26:40 crc kubenswrapper[4892]: I0122 09:26:40.010886 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7bbdc7ccd7-9gqb9" 
podStartSLOduration=6.087182807 podStartE2EDuration="7.010871684s" podCreationTimestamp="2026-01-22 09:26:33 +0000 UTC" firstStartedPulling="2026-01-22 09:26:36.714994506 +0000 UTC m=+966.559073579" lastFinishedPulling="2026-01-22 09:26:37.638683383 +0000 UTC m=+967.482762456" observedRunningTime="2026-01-22 09:26:40.008212709 +0000 UTC m=+969.852291802" watchObservedRunningTime="2026-01-22 09:26:40.010871684 +0000 UTC m=+969.854950747" Jan 22 09:26:40 crc kubenswrapper[4892]: I0122 09:26:40.063864 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757dc6fff9-z4lg2" podStartSLOduration=6.2225307579999996 podStartE2EDuration="7.06384276s" podCreationTimestamp="2026-01-22 09:26:33 +0000 UTC" firstStartedPulling="2026-01-22 09:26:36.21019734 +0000 UTC m=+966.054276403" lastFinishedPulling="2026-01-22 09:26:37.051509332 +0000 UTC m=+966.895588405" observedRunningTime="2026-01-22 09:26:40.035161728 +0000 UTC m=+969.879240791" watchObservedRunningTime="2026-01-22 09:26:40.06384276 +0000 UTC m=+969.907921823" Jan 22 09:26:40 crc kubenswrapper[4892]: I0122 09:26:40.066973 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-snr9q" podStartSLOduration=27.197587931 podStartE2EDuration="31.066966537s" podCreationTimestamp="2026-01-22 09:26:09 +0000 UTC" firstStartedPulling="2026-01-22 09:26:32.085897065 +0000 UTC m=+961.929976118" lastFinishedPulling="2026-01-22 09:26:35.955275671 +0000 UTC m=+965.799354724" observedRunningTime="2026-01-22 09:26:40.055768753 +0000 UTC m=+969.899847816" watchObservedRunningTime="2026-01-22 09:26:40.066966537 +0000 UTC m=+969.911045590" Jan 22 09:26:43 crc kubenswrapper[4892]: I0122 09:26:43.210023 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-55tz7" Jan 22 09:26:43 crc kubenswrapper[4892]: I0122 09:26:43.210547 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-55tz7" Jan 22 09:26:44 crc kubenswrapper[4892]: I0122 09:26:44.259513 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-55tz7" podUID="0eca1e45-30a1-46b6-9683-fd48782a4aea" containerName="registry-server" probeResult="failure" output=< Jan 22 09:26:44 crc kubenswrapper[4892]: timeout: failed to connect service ":50051" within 1s Jan 22 09:26:44 crc kubenswrapper[4892]: > Jan 22 09:26:44 crc kubenswrapper[4892]: I0122 09:26:44.767906 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dsksf" Jan 22 09:26:44 crc kubenswrapper[4892]: I0122 09:26:44.767952 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dsksf" Jan 22 09:26:44 crc kubenswrapper[4892]: I0122 09:26:44.826551 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dsksf" Jan 22 09:26:45 crc kubenswrapper[4892]: I0122 09:26:45.087030 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dsksf" Jan 22 09:26:45 crc kubenswrapper[4892]: I0122 09:26:45.964116 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zc7cb"] Jan 22 09:26:45 crc kubenswrapper[4892]: I0122 09:26:45.966223 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zc7cb" Jan 22 09:26:45 crc kubenswrapper[4892]: I0122 09:26:45.973944 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zc7cb"] Jan 22 09:26:46 crc kubenswrapper[4892]: I0122 09:26:46.040009 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"74af166d-c2f0-43b1-a516-e1d393e873b4","Type":"ContainerStarted","Data":"39f311600fcaef7d33e5f89dfe336dbdb841f1dc3371d9058889b0647549ac85"} Jan 22 09:26:46 crc kubenswrapper[4892]: I0122 09:26:46.041236 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5dcc844d-f681-4c5c-acb5-0edc57e32a0f","Type":"ContainerStarted","Data":"5e3a97257ec7f4291d0c1464e13bebc0539fcc0d2fc18677daa71a07e82aa3fd"} Jan 22 09:26:46 crc kubenswrapper[4892]: I0122 09:26:46.042621 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ed3e46d9-e9ca-453a-92a3-a07471597296","Type":"ContainerStarted","Data":"3e24b4fcad4adf5907dbd7a6ed30ea383571134b5e618527fe34e77102b9c8cf"} Jan 22 09:26:46 crc kubenswrapper[4892]: I0122 09:26:46.043848 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"aa34c3fd-3e21-49ac-becd-283928666ff2","Type":"ContainerStarted","Data":"078153d60e74cea83b900acfe4c96e2025b0da5ae94dbdaaeb3c0850f2a70a40"} Jan 22 09:26:46 crc kubenswrapper[4892]: I0122 09:26:46.045196 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jh6lw" event={"ID":"c16202bf-0e93-4bb8-96fb-cf6537ea21e6","Type":"ContainerStarted","Data":"e9442115c22979dd404489c3adf32405e5392ed07a7855ca37f57c2670c084fc"} Jan 22 09:26:46 crc kubenswrapper[4892]: I0122 09:26:46.060449 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=25.691106146 podStartE2EDuration="38.060430492s" podCreationTimestamp="2026-01-22 09:26:08 +0000 UTC" firstStartedPulling="2026-01-22 09:26:32.33082723 +0000 UTC m=+962.174906293" lastFinishedPulling="2026-01-22 09:26:44.700151576 +0000 UTC m=+974.544230639" observedRunningTime="2026-01-22 09:26:46.055152456 +0000 UTC m=+975.899231519" watchObservedRunningTime="2026-01-22 09:26:46.060430492 +0000 UTC m=+975.904509555" Jan 22 09:26:46 crc kubenswrapper[4892]: I0122 09:26:46.097593 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-jh6lw" podStartSLOduration=4.330268611 podStartE2EDuration="13.097567993s" podCreationTimestamp="2026-01-22 09:26:33 +0000 UTC" firstStartedPulling="2026-01-22 09:26:36.720426209 +0000 UTC m=+966.564505282" lastFinishedPulling="2026-01-22 09:26:45.487725601 +0000 UTC m=+975.331804664" observedRunningTime="2026-01-22 09:26:46.090915443 +0000 UTC m=+975.934994506" watchObservedRunningTime="2026-01-22 09:26:46.097567993 +0000 UTC m=+975.941647056" Jan 22 09:26:46 crc kubenswrapper[4892]: I0122 09:26:46.119620 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgsqv\" (UniqueName: \"kubernetes.io/projected/2f3af1ce-0cf1-450b-a417-698b7f2f6ace-kube-api-access-tgsqv\") pod \"redhat-marketplace-zc7cb\" (UID: \"2f3af1ce-0cf1-450b-a417-698b7f2f6ace\") " pod="openshift-marketplace/redhat-marketplace-zc7cb" Jan 22 09:26:46 crc kubenswrapper[4892]: I0122 09:26:46.119827 4892 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f3af1ce-0cf1-450b-a417-698b7f2f6ace-utilities\") pod \"redhat-marketplace-zc7cb\" (UID: \"2f3af1ce-0cf1-450b-a417-698b7f2f6ace\") " pod="openshift-marketplace/redhat-marketplace-zc7cb" Jan 22 09:26:46 crc kubenswrapper[4892]: I0122 09:26:46.119903 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f3af1ce-0cf1-450b-a417-698b7f2f6ace-catalog-content\") pod \"redhat-marketplace-zc7cb\" (UID: \"2f3af1ce-0cf1-450b-a417-698b7f2f6ace\") " pod="openshift-marketplace/redhat-marketplace-zc7cb" Jan 22 09:26:46 crc kubenswrapper[4892]: I0122 09:26:46.132643 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 22 09:26:46 crc kubenswrapper[4892]: I0122 09:26:46.151310 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=22.248649154 podStartE2EDuration="35.15127547s" podCreationTimestamp="2026-01-22 09:26:11 +0000 UTC" firstStartedPulling="2026-01-22 09:26:31.864582768 +0000 UTC m=+961.708661831" lastFinishedPulling="2026-01-22 09:26:44.767209084 +0000 UTC m=+974.611288147" observedRunningTime="2026-01-22 09:26:46.144224611 +0000 UTC m=+975.988303674" watchObservedRunningTime="2026-01-22 09:26:46.15127547 +0000 UTC m=+975.995354533" Jan 22 09:26:46 crc kubenswrapper[4892]: I0122 09:26:46.185625 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 22 09:26:46 crc kubenswrapper[4892]: I0122 09:26:46.221846 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgsqv\" (UniqueName: \"kubernetes.io/projected/2f3af1ce-0cf1-450b-a417-698b7f2f6ace-kube-api-access-tgsqv\") pod \"redhat-marketplace-zc7cb\" (UID: \"2f3af1ce-0cf1-450b-a417-698b7f2f6ace\") " pod="openshift-marketplace/redhat-marketplace-zc7cb" Jan 22 09:26:46 crc kubenswrapper[4892]: I0122 09:26:46.222055 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f3af1ce-0cf1-450b-a417-698b7f2f6ace-utilities\") pod \"redhat-marketplace-zc7cb\" (UID: \"2f3af1ce-0cf1-450b-a417-698b7f2f6ace\") " pod="openshift-marketplace/redhat-marketplace-zc7cb" Jan 22 09:26:46 crc kubenswrapper[4892]: I0122 09:26:46.222168 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f3af1ce-0cf1-450b-a417-698b7f2f6ace-catalog-content\") pod \"redhat-marketplace-zc7cb\" (UID: \"2f3af1ce-0cf1-450b-a417-698b7f2f6ace\") " pod="openshift-marketplace/redhat-marketplace-zc7cb" Jan 22 09:26:46 crc kubenswrapper[4892]: I0122 09:26:46.223259 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f3af1ce-0cf1-450b-a417-698b7f2f6ace-catalog-content\") pod \"redhat-marketplace-zc7cb\" (UID: \"2f3af1ce-0cf1-450b-a417-698b7f2f6ace\") " pod="openshift-marketplace/redhat-marketplace-zc7cb" Jan 22 09:26:46 crc kubenswrapper[4892]: I0122 09:26:46.223974 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f3af1ce-0cf1-450b-a417-698b7f2f6ace-utilities\") pod \"redhat-marketplace-zc7cb\" (UID: 
\"2f3af1ce-0cf1-450b-a417-698b7f2f6ace\") " pod="openshift-marketplace/redhat-marketplace-zc7cb" Jan 22 09:26:46 crc kubenswrapper[4892]: I0122 09:26:46.260782 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgsqv\" (UniqueName: \"kubernetes.io/projected/2f3af1ce-0cf1-450b-a417-698b7f2f6ace-kube-api-access-tgsqv\") pod \"redhat-marketplace-zc7cb\" (UID: \"2f3af1ce-0cf1-450b-a417-698b7f2f6ace\") " pod="openshift-marketplace/redhat-marketplace-zc7cb" Jan 22 09:26:46 crc kubenswrapper[4892]: I0122 09:26:46.280261 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zc7cb" Jan 22 09:26:46 crc kubenswrapper[4892]: I0122 09:26:46.291639 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 22 09:26:46 crc kubenswrapper[4892]: I0122 09:26:46.323393 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:26:46 crc kubenswrapper[4892]: I0122 09:26:46.323797 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:26:46 crc kubenswrapper[4892]: I0122 09:26:46.807762 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zc7cb"] Jan 22 09:26:46 crc kubenswrapper[4892]: W0122 09:26:46.811087 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f3af1ce_0cf1_450b_a417_698b7f2f6ace.slice/crio-02b574edde9b0e718c121b5236d8901130b0133ae72449208288c75ee2d0a07b WatchSource:0}: Error finding container 02b574edde9b0e718c121b5236d8901130b0133ae72449208288c75ee2d0a07b: Status 404 returned error can't find the container with id 02b574edde9b0e718c121b5236d8901130b0133ae72449208288c75ee2d0a07b Jan 22 09:26:47 crc kubenswrapper[4892]: I0122 09:26:47.055320 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f","Type":"ContainerStarted","Data":"5e4f7488273b1bcc89697c8644593504782d89df4d3e37c391c0d53e1f87559c"} Jan 22 09:26:47 crc kubenswrapper[4892]: I0122 09:26:47.056943 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zc7cb" event={"ID":"2f3af1ce-0cf1-450b-a417-698b7f2f6ace","Type":"ContainerStarted","Data":"02b574edde9b0e718c121b5236d8901130b0133ae72449208288c75ee2d0a07b"} Jan 22 09:26:47 crc kubenswrapper[4892]: I0122 09:26:47.059325 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c3106222-75cd-4011-a7d0-33a3d39e3f0c","Type":"ContainerStarted","Data":"9e7e5c6b61a197138f10c0535d765b391add3e2e041ba0cdac992275d295244f"} Jan 22 09:26:47 crc kubenswrapper[4892]: I0122 09:26:47.062553 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 22 09:26:47 crc kubenswrapper[4892]: I0122 09:26:47.113920 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/ovsdbserver-sb-0" Jan 22 09:26:48 crc kubenswrapper[4892]: I0122 09:26:48.066794 4892 generic.go:334] "Generic (PLEG): container finished" podID="2f3af1ce-0cf1-450b-a417-698b7f2f6ace" containerID="973c3eae5adf1cc66a851c86d4945470f8d54c7fbce30e4e7e41b3905701ce42" exitCode=0 Jan 22 09:26:48 crc kubenswrapper[4892]: I0122 09:26:48.066841 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zc7cb" event={"ID":"2f3af1ce-0cf1-450b-a417-698b7f2f6ace","Type":"ContainerDied","Data":"973c3eae5adf1cc66a851c86d4945470f8d54c7fbce30e4e7e41b3905701ce42"} Jan 22 09:26:48 crc kubenswrapper[4892]: I0122 09:26:48.068828 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d1581ad3-031b-451b-a8a7-bea327cf4ecd","Type":"ContainerStarted","Data":"e38b8618d6851c4201fb59715c0d4f8488823d577ea7377124878cd42ef52717"} Jan 22 09:26:48 crc kubenswrapper[4892]: I0122 09:26:48.069083 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 22 09:26:48 crc kubenswrapper[4892]: I0122 09:26:48.116970 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.681275543 podStartE2EDuration="45.116954354s" podCreationTimestamp="2026-01-22 09:26:03 +0000 UTC" firstStartedPulling="2026-01-22 09:26:04.49799086 +0000 UTC m=+934.342069923" lastFinishedPulling="2026-01-22 09:26:46.933669671 +0000 UTC m=+976.777748734" observedRunningTime="2026-01-22 09:26:48.112761833 +0000 UTC m=+977.956840886" watchObservedRunningTime="2026-01-22 09:26:48.116954354 +0000 UTC m=+977.961033417" Jan 22 09:26:48 crc kubenswrapper[4892]: I0122 09:26:48.361963 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dsksf"] Jan 22 09:26:48 crc kubenswrapper[4892]: I0122 09:26:48.362254 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dsksf" podUID="1e032087-c02e-4c24-a634-4f202dc6921b" containerName="registry-server" containerID="cri-o://7ab0c122ca045b4851d2361c00985396f6a69250f33605485d75150d6de26677" gracePeriod=2 Jan 22 09:26:48 crc kubenswrapper[4892]: I0122 09:26:48.568575 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7bbdc7ccd7-9gqb9" Jan 22 09:26:48 crc kubenswrapper[4892]: I0122 09:26:48.789459 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757dc6fff9-z4lg2" Jan 22 09:26:48 crc kubenswrapper[4892]: I0122 09:26:48.834624 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bbdc7ccd7-9gqb9"] Jan 22 09:26:48 crc kubenswrapper[4892]: I0122 09:26:48.924787 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 22 09:26:48 crc kubenswrapper[4892]: I0122 09:26:48.961137 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.079481 4892 generic.go:334] "Generic (PLEG): container finished" podID="1e032087-c02e-4c24-a634-4f202dc6921b" containerID="7ab0c122ca045b4851d2361c00985396f6a69250f33605485d75150d6de26677" exitCode=0 Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.079509 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dsksf" 
event={"ID":"1e032087-c02e-4c24-a634-4f202dc6921b","Type":"ContainerDied","Data":"7ab0c122ca045b4851d2361c00985396f6a69250f33605485d75150d6de26677"} Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.080820 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7bbdc7ccd7-9gqb9" podUID="9b67e5e8-090f-4674-9f29-f2586f095fd5" containerName="dnsmasq-dns" containerID="cri-o://0ef538c5c9a3cf6e0b8758b8c46aca57a830e3954d92beae562bc8537552d9ea" gracePeriod=10 Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.081268 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.120247 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.212112 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x9dnj" Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.212163 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x9dnj" Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.261369 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x9dnj" Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.348976 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.351226 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.355456 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-jw824" Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.355489 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.355551 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.355627 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.359424 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.439191 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90d16687-ebb2-43f7-bdf4-04334f5895d7-config\") pod \"ovn-northd-0\" (UID: \"90d16687-ebb2-43f7-bdf4-04334f5895d7\") " pod="openstack/ovn-northd-0" Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.439258 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90d16687-ebb2-43f7-bdf4-04334f5895d7-scripts\") pod \"ovn-northd-0\" (UID: \"90d16687-ebb2-43f7-bdf4-04334f5895d7\") " pod="openstack/ovn-northd-0" Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.439397 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6gw9\" (UniqueName: 
\"kubernetes.io/projected/90d16687-ebb2-43f7-bdf4-04334f5895d7-kube-api-access-p6gw9\") pod \"ovn-northd-0\" (UID: \"90d16687-ebb2-43f7-bdf4-04334f5895d7\") " pod="openstack/ovn-northd-0" Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.439468 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/90d16687-ebb2-43f7-bdf4-04334f5895d7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"90d16687-ebb2-43f7-bdf4-04334f5895d7\") " pod="openstack/ovn-northd-0" Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.439760 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90d16687-ebb2-43f7-bdf4-04334f5895d7-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"90d16687-ebb2-43f7-bdf4-04334f5895d7\") " pod="openstack/ovn-northd-0" Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.439795 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/90d16687-ebb2-43f7-bdf4-04334f5895d7-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"90d16687-ebb2-43f7-bdf4-04334f5895d7\") " pod="openstack/ovn-northd-0" Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.439836 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/90d16687-ebb2-43f7-bdf4-04334f5895d7-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"90d16687-ebb2-43f7-bdf4-04334f5895d7\") " pod="openstack/ovn-northd-0" Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.541520 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90d16687-ebb2-43f7-bdf4-04334f5895d7-config\") pod \"ovn-northd-0\" (UID: \"90d16687-ebb2-43f7-bdf4-04334f5895d7\") " pod="openstack/ovn-northd-0" Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.541575 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90d16687-ebb2-43f7-bdf4-04334f5895d7-scripts\") pod \"ovn-northd-0\" (UID: \"90d16687-ebb2-43f7-bdf4-04334f5895d7\") " pod="openstack/ovn-northd-0" Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.541616 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6gw9\" (UniqueName: \"kubernetes.io/projected/90d16687-ebb2-43f7-bdf4-04334f5895d7-kube-api-access-p6gw9\") pod \"ovn-northd-0\" (UID: \"90d16687-ebb2-43f7-bdf4-04334f5895d7\") " pod="openstack/ovn-northd-0" Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.541655 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/90d16687-ebb2-43f7-bdf4-04334f5895d7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"90d16687-ebb2-43f7-bdf4-04334f5895d7\") " pod="openstack/ovn-northd-0" Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.541675 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90d16687-ebb2-43f7-bdf4-04334f5895d7-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"90d16687-ebb2-43f7-bdf4-04334f5895d7\") " pod="openstack/ovn-northd-0" Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.541691 4892 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/90d16687-ebb2-43f7-bdf4-04334f5895d7-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"90d16687-ebb2-43f7-bdf4-04334f5895d7\") " pod="openstack/ovn-northd-0" Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.541711 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/90d16687-ebb2-43f7-bdf4-04334f5895d7-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"90d16687-ebb2-43f7-bdf4-04334f5895d7\") " pod="openstack/ovn-northd-0" Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.542684 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90d16687-ebb2-43f7-bdf4-04334f5895d7-config\") pod \"ovn-northd-0\" (UID: \"90d16687-ebb2-43f7-bdf4-04334f5895d7\") " pod="openstack/ovn-northd-0" Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.543129 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/90d16687-ebb2-43f7-bdf4-04334f5895d7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"90d16687-ebb2-43f7-bdf4-04334f5895d7\") " pod="openstack/ovn-northd-0" Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.543351 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90d16687-ebb2-43f7-bdf4-04334f5895d7-scripts\") pod \"ovn-northd-0\" (UID: \"90d16687-ebb2-43f7-bdf4-04334f5895d7\") " pod="openstack/ovn-northd-0" Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.549043 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/90d16687-ebb2-43f7-bdf4-04334f5895d7-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"90d16687-ebb2-43f7-bdf4-04334f5895d7\") " pod="openstack/ovn-northd-0" Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.549578 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/90d16687-ebb2-43f7-bdf4-04334f5895d7-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"90d16687-ebb2-43f7-bdf4-04334f5895d7\") " pod="openstack/ovn-northd-0" Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.549753 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90d16687-ebb2-43f7-bdf4-04334f5895d7-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"90d16687-ebb2-43f7-bdf4-04334f5895d7\") " pod="openstack/ovn-northd-0" Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.560333 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6gw9\" (UniqueName: \"kubernetes.io/projected/90d16687-ebb2-43f7-bdf4-04334f5895d7-kube-api-access-p6gw9\") pod \"ovn-northd-0\" (UID: \"90d16687-ebb2-43f7-bdf4-04334f5895d7\") " pod="openstack/ovn-northd-0" Jan 22 09:26:49 crc kubenswrapper[4892]: I0122 09:26:49.695944 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 22 09:26:50 crc kubenswrapper[4892]: I0122 09:26:50.087419 4892 generic.go:334] "Generic (PLEG): container finished" podID="9b67e5e8-090f-4674-9f29-f2586f095fd5" containerID="0ef538c5c9a3cf6e0b8758b8c46aca57a830e3954d92beae562bc8537552d9ea" exitCode=0 Jan 22 09:26:50 crc kubenswrapper[4892]: I0122 09:26:50.087491 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bbdc7ccd7-9gqb9" event={"ID":"9b67e5e8-090f-4674-9f29-f2586f095fd5","Type":"ContainerDied","Data":"0ef538c5c9a3cf6e0b8758b8c46aca57a830e3954d92beae562bc8537552d9ea"} Jan 22 09:26:50 crc kubenswrapper[4892]: I0122 09:26:50.130834 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 22 09:26:50 crc kubenswrapper[4892]: I0122 09:26:50.150747 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x9dnj" Jan 22 09:26:51 crc kubenswrapper[4892]: I0122 09:26:51.095077 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"90d16687-ebb2-43f7-bdf4-04334f5895d7","Type":"ContainerStarted","Data":"19cf468cf88f1e31b64149b33b219ad24ff13f44647cfcb9f6a4f0716913d3af"} Jan 22 09:26:51 crc kubenswrapper[4892]: I0122 09:26:51.752718 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x9dnj"] Jan 22 09:26:52 crc kubenswrapper[4892]: I0122 09:26:52.101603 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x9dnj" podUID="ea8e907a-3cb4-4368-8c4e-c312f537af37" containerName="registry-server" containerID="cri-o://7ed12bbf609d970206b09593c0413534f26a446a0b98394b2dc814b8a8a30bae" gracePeriod=2 Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.111989 4892 generic.go:334] "Generic (PLEG): container finished" podID="ea8e907a-3cb4-4368-8c4e-c312f537af37" containerID="7ed12bbf609d970206b09593c0413534f26a446a0b98394b2dc814b8a8a30bae" exitCode=0 Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.112415 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x9dnj" event={"ID":"ea8e907a-3cb4-4368-8c4e-c312f537af37","Type":"ContainerDied","Data":"7ed12bbf609d970206b09593c0413534f26a446a0b98394b2dc814b8a8a30bae"} Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.115207 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dsksf" event={"ID":"1e032087-c02e-4c24-a634-4f202dc6921b","Type":"ContainerDied","Data":"2ff495a25b81b6df687959f63781838ae2c2634bc145972149699da718be3d58"} Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.115237 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ff495a25b81b6df687959f63781838ae2c2634bc145972149699da718be3d58" Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.215679 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dsksf" Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.222770 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bbdc7ccd7-9gqb9" Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.281989 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-55tz7" Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.313686 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92fqc\" (UniqueName: \"kubernetes.io/projected/9b67e5e8-090f-4674-9f29-f2586f095fd5-kube-api-access-92fqc\") pod \"9b67e5e8-090f-4674-9f29-f2586f095fd5\" (UID: \"9b67e5e8-090f-4674-9f29-f2586f095fd5\") " Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.313761 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vgqb\" (UniqueName: \"kubernetes.io/projected/1e032087-c02e-4c24-a634-4f202dc6921b-kube-api-access-4vgqb\") pod \"1e032087-c02e-4c24-a634-4f202dc6921b\" (UID: \"1e032087-c02e-4c24-a634-4f202dc6921b\") " Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.313838 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b67e5e8-090f-4674-9f29-f2586f095fd5-ovsdbserver-nb\") pod \"9b67e5e8-090f-4674-9f29-f2586f095fd5\" (UID: \"9b67e5e8-090f-4674-9f29-f2586f095fd5\") " Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.313903 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e032087-c02e-4c24-a634-4f202dc6921b-utilities\") pod \"1e032087-c02e-4c24-a634-4f202dc6921b\" (UID: \"1e032087-c02e-4c24-a634-4f202dc6921b\") " Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.313980 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b67e5e8-090f-4674-9f29-f2586f095fd5-config\") pod \"9b67e5e8-090f-4674-9f29-f2586f095fd5\" (UID: \"9b67e5e8-090f-4674-9f29-f2586f095fd5\") " Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.314037 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e032087-c02e-4c24-a634-4f202dc6921b-catalog-content\") pod \"1e032087-c02e-4c24-a634-4f202dc6921b\" (UID: \"1e032087-c02e-4c24-a634-4f202dc6921b\") " Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.314059 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b67e5e8-090f-4674-9f29-f2586f095fd5-dns-svc\") pod \"9b67e5e8-090f-4674-9f29-f2586f095fd5\" (UID: \"9b67e5e8-090f-4674-9f29-f2586f095fd5\") " Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.315097 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e032087-c02e-4c24-a634-4f202dc6921b-utilities" (OuterVolumeSpecName: "utilities") pod "1e032087-c02e-4c24-a634-4f202dc6921b" (UID: "1e032087-c02e-4c24-a634-4f202dc6921b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.329680 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-55tz7" Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.330207 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b67e5e8-090f-4674-9f29-f2586f095fd5-kube-api-access-92fqc" (OuterVolumeSpecName: "kube-api-access-92fqc") pod "9b67e5e8-090f-4674-9f29-f2586f095fd5" (UID: "9b67e5e8-090f-4674-9f29-f2586f095fd5"). InnerVolumeSpecName "kube-api-access-92fqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.337062 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x9dnj" Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.339483 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e032087-c02e-4c24-a634-4f202dc6921b-kube-api-access-4vgqb" (OuterVolumeSpecName: "kube-api-access-4vgqb") pod "1e032087-c02e-4c24-a634-4f202dc6921b" (UID: "1e032087-c02e-4c24-a634-4f202dc6921b"). InnerVolumeSpecName "kube-api-access-4vgqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.358249 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b67e5e8-090f-4674-9f29-f2586f095fd5-config" (OuterVolumeSpecName: "config") pod "9b67e5e8-090f-4674-9f29-f2586f095fd5" (UID: "9b67e5e8-090f-4674-9f29-f2586f095fd5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.374628 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b67e5e8-090f-4674-9f29-f2586f095fd5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9b67e5e8-090f-4674-9f29-f2586f095fd5" (UID: "9b67e5e8-090f-4674-9f29-f2586f095fd5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.376633 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e032087-c02e-4c24-a634-4f202dc6921b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e032087-c02e-4c24-a634-4f202dc6921b" (UID: "1e032087-c02e-4c24-a634-4f202dc6921b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.392335 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b67e5e8-090f-4674-9f29-f2586f095fd5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9b67e5e8-090f-4674-9f29-f2586f095fd5" (UID: "9b67e5e8-090f-4674-9f29-f2586f095fd5"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.415965 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea8e907a-3cb4-4368-8c4e-c312f537af37-catalog-content\") pod \"ea8e907a-3cb4-4368-8c4e-c312f537af37\" (UID: \"ea8e907a-3cb4-4368-8c4e-c312f537af37\") " Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.416044 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnql6\" (UniqueName: \"kubernetes.io/projected/ea8e907a-3cb4-4368-8c4e-c312f537af37-kube-api-access-dnql6\") pod \"ea8e907a-3cb4-4368-8c4e-c312f537af37\" (UID: \"ea8e907a-3cb4-4368-8c4e-c312f537af37\") " Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.416085 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea8e907a-3cb4-4368-8c4e-c312f537af37-utilities\") pod \"ea8e907a-3cb4-4368-8c4e-c312f537af37\" (UID: \"ea8e907a-3cb4-4368-8c4e-c312f537af37\") " Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.416530 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b67e5e8-090f-4674-9f29-f2586f095fd5-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.416550 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e032087-c02e-4c24-a634-4f202dc6921b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.416560 4892 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b67e5e8-090f-4674-9f29-f2586f095fd5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.416569 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92fqc\" (UniqueName: \"kubernetes.io/projected/9b67e5e8-090f-4674-9f29-f2586f095fd5-kube-api-access-92fqc\") on node \"crc\" DevicePath \"\"" Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.416578 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vgqb\" (UniqueName: \"kubernetes.io/projected/1e032087-c02e-4c24-a634-4f202dc6921b-kube-api-access-4vgqb\") on node \"crc\" DevicePath \"\"" Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.416585 4892 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b67e5e8-090f-4674-9f29-f2586f095fd5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.416594 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e032087-c02e-4c24-a634-4f202dc6921b-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.418981 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea8e907a-3cb4-4368-8c4e-c312f537af37-utilities" (OuterVolumeSpecName: "utilities") pod "ea8e907a-3cb4-4368-8c4e-c312f537af37" (UID: "ea8e907a-3cb4-4368-8c4e-c312f537af37"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.420098 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea8e907a-3cb4-4368-8c4e-c312f537af37-kube-api-access-dnql6" (OuterVolumeSpecName: "kube-api-access-dnql6") pod "ea8e907a-3cb4-4368-8c4e-c312f537af37" (UID: "ea8e907a-3cb4-4368-8c4e-c312f537af37"). InnerVolumeSpecName "kube-api-access-dnql6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.471297 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea8e907a-3cb4-4368-8c4e-c312f537af37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea8e907a-3cb4-4368-8c4e-c312f537af37" (UID: "ea8e907a-3cb4-4368-8c4e-c312f537af37"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.517913 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea8e907a-3cb4-4368-8c4e-c312f537af37-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.517942 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnql6\" (UniqueName: \"kubernetes.io/projected/ea8e907a-3cb4-4368-8c4e-c312f537af37-kube-api-access-dnql6\") on node \"crc\" DevicePath \"\"" Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.517951 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea8e907a-3cb4-4368-8c4e-c312f537af37-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:26:53 crc kubenswrapper[4892]: I0122 09:26:53.906829 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 22 09:26:54 crc kubenswrapper[4892]: I0122 09:26:54.123602 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x9dnj" event={"ID":"ea8e907a-3cb4-4368-8c4e-c312f537af37","Type":"ContainerDied","Data":"8b32f9cc2c82c533ee79eb23cc330b5c0440acb3d62498043634c0c2214cb9b3"} Jan 22 09:26:54 crc kubenswrapper[4892]: I0122 09:26:54.123633 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x9dnj" Jan 22 09:26:54 crc kubenswrapper[4892]: I0122 09:26:54.123676 4892 scope.go:117] "RemoveContainer" containerID="7ed12bbf609d970206b09593c0413534f26a446a0b98394b2dc814b8a8a30bae" Jan 22 09:26:54 crc kubenswrapper[4892]: I0122 09:26:54.127132 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bbdc7ccd7-9gqb9" event={"ID":"9b67e5e8-090f-4674-9f29-f2586f095fd5","Type":"ContainerDied","Data":"8ca536a5093c19f12537b096ed2bcf237c11af3e677b79d4b176796fa6d55c09"} Jan 22 09:26:54 crc kubenswrapper[4892]: I0122 09:26:54.127166 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bbdc7ccd7-9gqb9" Jan 22 09:26:54 crc kubenswrapper[4892]: I0122 09:26:54.127216 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dsksf" Jan 22 09:26:54 crc kubenswrapper[4892]: I0122 09:26:54.152646 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dsksf"] Jan 22 09:26:54 crc kubenswrapper[4892]: I0122 09:26:54.159490 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dsksf"] Jan 22 09:26:54 crc kubenswrapper[4892]: I0122 09:26:54.171935 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bbdc7ccd7-9gqb9"] Jan 22 09:26:54 crc kubenswrapper[4892]: I0122 09:26:54.179632 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bbdc7ccd7-9gqb9"] Jan 22 09:26:54 crc kubenswrapper[4892]: I0122 09:26:54.184552 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x9dnj"] Jan 22 09:26:54 crc kubenswrapper[4892]: I0122 09:26:54.195734 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x9dnj"] Jan 22 09:26:54 crc kubenswrapper[4892]: I0122 09:26:54.238519 4892 scope.go:117] "RemoveContainer" containerID="b86f6bc449c6857c65f934a4d61fe8390d8e8d10bacc96b6dea3a0cd4515b0e0" Jan 22 09:26:54 crc kubenswrapper[4892]: I0122 09:26:54.261641 4892 scope.go:117] "RemoveContainer" containerID="c7e40fc9e301d236ab5b5f1b1f94855de540b5e435a4c60b814875c7116a968a" Jan 22 09:26:54 crc kubenswrapper[4892]: I0122 09:26:54.288167 4892 scope.go:117] "RemoveContainer" containerID="0ef538c5c9a3cf6e0b8758b8c46aca57a830e3954d92beae562bc8537552d9ea" Jan 22 09:26:54 crc kubenswrapper[4892]: I0122 09:26:54.324930 4892 scope.go:117] "RemoveContainer" containerID="bb17adfc46d7c899719c97a7a025b60d06208e347ab0746909c3fd6d5f9aae3b" Jan 22 09:26:55 crc kubenswrapper[4892]: I0122 09:26:55.135809 4892 generic.go:334] "Generic (PLEG): container finished" podID="2f3af1ce-0cf1-450b-a417-698b7f2f6ace" containerID="be51e2cf04dffae540db99487d6b12cbb1f6333fbd0913b4f1c196402521694d" exitCode=0 Jan 22 09:26:55 crc kubenswrapper[4892]: I0122 09:26:55.135902 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zc7cb" event={"ID":"2f3af1ce-0cf1-450b-a417-698b7f2f6ace","Type":"ContainerDied","Data":"be51e2cf04dffae540db99487d6b12cbb1f6333fbd0913b4f1c196402521694d"} Jan 22 09:26:55 crc kubenswrapper[4892]: I0122 09:26:55.148719 4892 generic.go:334] "Generic (PLEG): container finished" podID="5dcc844d-f681-4c5c-acb5-0edc57e32a0f" containerID="5e3a97257ec7f4291d0c1464e13bebc0539fcc0d2fc18677daa71a07e82aa3fd" exitCode=0 Jan 22 09:26:55 crc kubenswrapper[4892]: I0122 09:26:55.148794 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5dcc844d-f681-4c5c-acb5-0edc57e32a0f","Type":"ContainerDied","Data":"5e3a97257ec7f4291d0c1464e13bebc0539fcc0d2fc18677daa71a07e82aa3fd"} Jan 22 09:26:55 crc kubenswrapper[4892]: I0122 09:26:55.435796 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e032087-c02e-4c24-a634-4f202dc6921b" path="/var/lib/kubelet/pods/1e032087-c02e-4c24-a634-4f202dc6921b/volumes" Jan 22 09:26:55 crc kubenswrapper[4892]: I0122 09:26:55.436927 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b67e5e8-090f-4674-9f29-f2586f095fd5" path="/var/lib/kubelet/pods/9b67e5e8-090f-4674-9f29-f2586f095fd5/volumes" Jan 22 09:26:55 crc kubenswrapper[4892]: I0122 09:26:55.437767 4892 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="ea8e907a-3cb4-4368-8c4e-c312f537af37" path="/var/lib/kubelet/pods/ea8e907a-3cb4-4368-8c4e-c312f537af37/volumes" Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.162020 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zc7cb" event={"ID":"2f3af1ce-0cf1-450b-a417-698b7f2f6ace","Type":"ContainerStarted","Data":"6b3d242fec82537ccb5d0f93edea5b53dd83965408da69ac248efdd090b9f539"} Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.164373 4892 generic.go:334] "Generic (PLEG): container finished" podID="aa34c3fd-3e21-49ac-becd-283928666ff2" containerID="078153d60e74cea83b900acfe4c96e2025b0da5ae94dbdaaeb3c0850f2a70a40" exitCode=0 Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.164430 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"aa34c3fd-3e21-49ac-becd-283928666ff2","Type":"ContainerDied","Data":"078153d60e74cea83b900acfe4c96e2025b0da5ae94dbdaaeb3c0850f2a70a40"} Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.169098 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5dcc844d-f681-4c5c-acb5-0edc57e32a0f","Type":"ContainerStarted","Data":"c19faa4c72c79d815399cb7a9ab896a1b347fc09780905b17016d1f08f0b9f50"} Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.171817 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"90d16687-ebb2-43f7-bdf4-04334f5895d7","Type":"ContainerStarted","Data":"c973c25096f0adc0ff8f4dc186e76bd0c8a08d32ab35f2923e682710128b2bf8"} Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.171855 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"90d16687-ebb2-43f7-bdf4-04334f5895d7","Type":"ContainerStarted","Data":"ef64b35c90a0d8053137e77ee97d0bc2798a92bdaabd615cf5f5f50087f077fd"} Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.172065 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.233216 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zc7cb" podStartSLOduration=3.766375536 podStartE2EDuration="11.233194665s" podCreationTimestamp="2026-01-22 09:26:45 +0000 UTC" firstStartedPulling="2026-01-22 09:26:48.068478771 +0000 UTC m=+977.912557834" lastFinishedPulling="2026-01-22 09:26:55.5352979 +0000 UTC m=+985.379376963" observedRunningTime="2026-01-22 09:26:56.202274873 +0000 UTC m=+986.046354006" watchObservedRunningTime="2026-01-22 09:26:56.233194665 +0000 UTC m=+986.077273728" Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.234115 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.509654384 podStartE2EDuration="7.234107417s" podCreationTimestamp="2026-01-22 09:26:49 +0000 UTC" firstStartedPulling="2026-01-22 09:26:50.14409929 +0000 UTC m=+979.988178353" lastFinishedPulling="2026-01-22 09:26:54.868552323 +0000 UTC m=+984.712631386" observedRunningTime="2026-01-22 09:26:56.227314944 +0000 UTC m=+986.071394007" watchObservedRunningTime="2026-01-22 09:26:56.234107417 +0000 UTC m=+986.078186480" Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.282115 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zc7cb" Jan 22 09:26:56 crc 
kubenswrapper[4892]: I0122 09:26:56.282167 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zc7cb" Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.312934 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=12.987903056 podStartE2EDuration="54.312910016s" podCreationTimestamp="2026-01-22 09:26:02 +0000 UTC" firstStartedPulling="2026-01-22 09:26:04.175796704 +0000 UTC m=+934.019875767" lastFinishedPulling="2026-01-22 09:26:45.500803624 +0000 UTC m=+975.344882727" observedRunningTime="2026-01-22 09:26:56.30764248 +0000 UTC m=+986.151721553" watchObservedRunningTime="2026-01-22 09:26:56.312910016 +0000 UTC m=+986.156989079" Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.385755 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-4flns"] Jan 22 09:26:56 crc kubenswrapper[4892]: E0122 09:26:56.386352 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b67e5e8-090f-4674-9f29-f2586f095fd5" containerName="dnsmasq-dns" Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.386372 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b67e5e8-090f-4674-9f29-f2586f095fd5" containerName="dnsmasq-dns" Jan 22 09:26:56 crc kubenswrapper[4892]: E0122 09:26:56.386402 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea8e907a-3cb4-4368-8c4e-c312f537af37" containerName="extract-content" Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.386409 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea8e907a-3cb4-4368-8c4e-c312f537af37" containerName="extract-content" Jan 22 09:26:56 crc kubenswrapper[4892]: E0122 09:26:56.386424 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea8e907a-3cb4-4368-8c4e-c312f537af37" containerName="extract-utilities" Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.386431 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea8e907a-3cb4-4368-8c4e-c312f537af37" containerName="extract-utilities" Jan 22 09:26:56 crc kubenswrapper[4892]: E0122 09:26:56.386445 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e032087-c02e-4c24-a634-4f202dc6921b" containerName="registry-server" Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.386453 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e032087-c02e-4c24-a634-4f202dc6921b" containerName="registry-server" Jan 22 09:26:56 crc kubenswrapper[4892]: E0122 09:26:56.386466 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e032087-c02e-4c24-a634-4f202dc6921b" containerName="extract-content" Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.386473 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e032087-c02e-4c24-a634-4f202dc6921b" containerName="extract-content" Jan 22 09:26:56 crc kubenswrapper[4892]: E0122 09:26:56.386488 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b67e5e8-090f-4674-9f29-f2586f095fd5" containerName="init" Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.386495 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b67e5e8-090f-4674-9f29-f2586f095fd5" containerName="init" Jan 22 09:26:56 crc kubenswrapper[4892]: E0122 09:26:56.386501 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e032087-c02e-4c24-a634-4f202dc6921b" containerName="extract-utilities" Jan 22 09:26:56 crc 
kubenswrapper[4892]: I0122 09:26:56.386506 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e032087-c02e-4c24-a634-4f202dc6921b" containerName="extract-utilities" Jan 22 09:26:56 crc kubenswrapper[4892]: E0122 09:26:56.386517 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea8e907a-3cb4-4368-8c4e-c312f537af37" containerName="registry-server" Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.386523 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea8e907a-3cb4-4368-8c4e-c312f537af37" containerName="registry-server" Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.386686 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e032087-c02e-4c24-a634-4f202dc6921b" containerName="registry-server" Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.386703 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea8e907a-3cb4-4368-8c4e-c312f537af37" containerName="registry-server" Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.386718 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b67e5e8-090f-4674-9f29-f2586f095fd5" containerName="dnsmasq-dns" Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.390180 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-4flns" Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.394907 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-4flns"] Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.473082 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d66d181d-3705-4a58-8091-2bef209b355c-config\") pod \"dnsmasq-dns-6cb545bd4c-4flns\" (UID: \"d66d181d-3705-4a58-8091-2bef209b355c\") " pod="openstack/dnsmasq-dns-6cb545bd4c-4flns" Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.473141 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d66d181d-3705-4a58-8091-2bef209b355c-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-4flns\" (UID: \"d66d181d-3705-4a58-8091-2bef209b355c\") " pod="openstack/dnsmasq-dns-6cb545bd4c-4flns" Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.473213 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcmdw\" (UniqueName: \"kubernetes.io/projected/d66d181d-3705-4a58-8091-2bef209b355c-kube-api-access-jcmdw\") pod \"dnsmasq-dns-6cb545bd4c-4flns\" (UID: \"d66d181d-3705-4a58-8091-2bef209b355c\") " pod="openstack/dnsmasq-dns-6cb545bd4c-4flns" Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.473259 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d66d181d-3705-4a58-8091-2bef209b355c-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-4flns\" (UID: \"d66d181d-3705-4a58-8091-2bef209b355c\") " pod="openstack/dnsmasq-dns-6cb545bd4c-4flns" Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.473340 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d66d181d-3705-4a58-8091-2bef209b355c-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-4flns\" (UID: \"d66d181d-3705-4a58-8091-2bef209b355c\") " pod="openstack/dnsmasq-dns-6cb545bd4c-4flns" Jan 22 
09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.574521 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d66d181d-3705-4a58-8091-2bef209b355c-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-4flns\" (UID: \"d66d181d-3705-4a58-8091-2bef209b355c\") " pod="openstack/dnsmasq-dns-6cb545bd4c-4flns" Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.574618 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d66d181d-3705-4a58-8091-2bef209b355c-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-4flns\" (UID: \"d66d181d-3705-4a58-8091-2bef209b355c\") " pod="openstack/dnsmasq-dns-6cb545bd4c-4flns" Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.574656 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d66d181d-3705-4a58-8091-2bef209b355c-config\") pod \"dnsmasq-dns-6cb545bd4c-4flns\" (UID: \"d66d181d-3705-4a58-8091-2bef209b355c\") " pod="openstack/dnsmasq-dns-6cb545bd4c-4flns" Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.574678 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d66d181d-3705-4a58-8091-2bef209b355c-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-4flns\" (UID: \"d66d181d-3705-4a58-8091-2bef209b355c\") " pod="openstack/dnsmasq-dns-6cb545bd4c-4flns" Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.574721 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcmdw\" (UniqueName: \"kubernetes.io/projected/d66d181d-3705-4a58-8091-2bef209b355c-kube-api-access-jcmdw\") pod \"dnsmasq-dns-6cb545bd4c-4flns\" (UID: \"d66d181d-3705-4a58-8091-2bef209b355c\") " pod="openstack/dnsmasq-dns-6cb545bd4c-4flns" Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.575814 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d66d181d-3705-4a58-8091-2bef209b355c-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-4flns\" (UID: \"d66d181d-3705-4a58-8091-2bef209b355c\") " pod="openstack/dnsmasq-dns-6cb545bd4c-4flns" Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.576248 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d66d181d-3705-4a58-8091-2bef209b355c-config\") pod \"dnsmasq-dns-6cb545bd4c-4flns\" (UID: \"d66d181d-3705-4a58-8091-2bef209b355c\") " pod="openstack/dnsmasq-dns-6cb545bd4c-4flns" Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.576305 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d66d181d-3705-4a58-8091-2bef209b355c-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-4flns\" (UID: \"d66d181d-3705-4a58-8091-2bef209b355c\") " pod="openstack/dnsmasq-dns-6cb545bd4c-4flns" Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.576300 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d66d181d-3705-4a58-8091-2bef209b355c-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-4flns\" (UID: \"d66d181d-3705-4a58-8091-2bef209b355c\") " pod="openstack/dnsmasq-dns-6cb545bd4c-4flns" Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.593371 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jcmdw\" (UniqueName: \"kubernetes.io/projected/d66d181d-3705-4a58-8091-2bef209b355c-kube-api-access-jcmdw\") pod \"dnsmasq-dns-6cb545bd4c-4flns\" (UID: \"d66d181d-3705-4a58-8091-2bef209b355c\") " pod="openstack/dnsmasq-dns-6cb545bd4c-4flns" Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.729924 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-4flns" Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.956061 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-55tz7"] Jan 22 09:26:56 crc kubenswrapper[4892]: I0122 09:26:56.956768 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-55tz7" podUID="0eca1e45-30a1-46b6-9683-fd48782a4aea" containerName="registry-server" containerID="cri-o://e12865381cb945841151c166ce8409388b577e2309887b7a7d14b0a57d0c57d0" gracePeriod=2 Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.180063 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"aa34c3fd-3e21-49ac-becd-283928666ff2","Type":"ContainerStarted","Data":"1a7645e74a8c249bb11a5f9096d7548a7a0f246212a4732ebec91246acd195a8"} Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.182895 4892 generic.go:334] "Generic (PLEG): container finished" podID="0eca1e45-30a1-46b6-9683-fd48782a4aea" containerID="e12865381cb945841151c166ce8409388b577e2309887b7a7d14b0a57d0c57d0" exitCode=0 Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.182962 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55tz7" event={"ID":"0eca1e45-30a1-46b6-9683-fd48782a4aea","Type":"ContainerDied","Data":"e12865381cb945841151c166ce8409388b577e2309887b7a7d14b0a57d0c57d0"} Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.193991 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-4flns"] Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.212756 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=14.593144776 podStartE2EDuration="57.212736911s" podCreationTimestamp="2026-01-22 09:26:00 +0000 UTC" firstStartedPulling="2026-01-22 09:26:02.881169088 +0000 UTC m=+932.725248151" lastFinishedPulling="2026-01-22 09:26:45.500761223 +0000 UTC m=+975.344840286" observedRunningTime="2026-01-22 09:26:57.206986844 +0000 UTC m=+987.051065907" watchObservedRunningTime="2026-01-22 09:26:57.212736911 +0000 UTC m=+987.056815974" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.366075 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-55tz7" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.380446 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-zc7cb" podUID="2f3af1ce-0cf1-450b-a417-698b7f2f6ace" containerName="registry-server" probeResult="failure" output=< Jan 22 09:26:57 crc kubenswrapper[4892]: timeout: failed to connect service ":50051" within 1s Jan 22 09:26:57 crc kubenswrapper[4892]: > Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.386469 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eca1e45-30a1-46b6-9683-fd48782a4aea-utilities\") pod \"0eca1e45-30a1-46b6-9683-fd48782a4aea\" (UID: \"0eca1e45-30a1-46b6-9683-fd48782a4aea\") " Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.386536 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvlf5\" (UniqueName: \"kubernetes.io/projected/0eca1e45-30a1-46b6-9683-fd48782a4aea-kube-api-access-jvlf5\") pod \"0eca1e45-30a1-46b6-9683-fd48782a4aea\" (UID: \"0eca1e45-30a1-46b6-9683-fd48782a4aea\") " Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.386578 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eca1e45-30a1-46b6-9683-fd48782a4aea-catalog-content\") pod \"0eca1e45-30a1-46b6-9683-fd48782a4aea\" (UID: \"0eca1e45-30a1-46b6-9683-fd48782a4aea\") " Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.387890 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0eca1e45-30a1-46b6-9683-fd48782a4aea-utilities" (OuterVolumeSpecName: "utilities") pod "0eca1e45-30a1-46b6-9683-fd48782a4aea" (UID: "0eca1e45-30a1-46b6-9683-fd48782a4aea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.398663 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eca1e45-30a1-46b6-9683-fd48782a4aea-kube-api-access-jvlf5" (OuterVolumeSpecName: "kube-api-access-jvlf5") pod "0eca1e45-30a1-46b6-9683-fd48782a4aea" (UID: "0eca1e45-30a1-46b6-9683-fd48782a4aea"). InnerVolumeSpecName "kube-api-access-jvlf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.489432 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eca1e45-30a1-46b6-9683-fd48782a4aea-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.489468 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvlf5\" (UniqueName: \"kubernetes.io/projected/0eca1e45-30a1-46b6-9683-fd48782a4aea-kube-api-access-jvlf5\") on node \"crc\" DevicePath \"\"" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.495100 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0eca1e45-30a1-46b6-9683-fd48782a4aea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0eca1e45-30a1-46b6-9683-fd48782a4aea" (UID: "0eca1e45-30a1-46b6-9683-fd48782a4aea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.542087 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 22 09:26:57 crc kubenswrapper[4892]: E0122 09:26:57.542406 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eca1e45-30a1-46b6-9683-fd48782a4aea" containerName="extract-content" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.542421 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eca1e45-30a1-46b6-9683-fd48782a4aea" containerName="extract-content" Jan 22 09:26:57 crc kubenswrapper[4892]: E0122 09:26:57.542442 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eca1e45-30a1-46b6-9683-fd48782a4aea" containerName="registry-server" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.542450 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eca1e45-30a1-46b6-9683-fd48782a4aea" containerName="registry-server" Jan 22 09:26:57 crc kubenswrapper[4892]: E0122 09:26:57.542463 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eca1e45-30a1-46b6-9683-fd48782a4aea" containerName="extract-utilities" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.542469 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eca1e45-30a1-46b6-9683-fd48782a4aea" containerName="extract-utilities" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.542629 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eca1e45-30a1-46b6-9683-fd48782a4aea" containerName="registry-server" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.563499 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.574831 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.579129 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.579596 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.579717 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-g4cmq" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.591745 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/d0c2888e-984a-482d-b7a3-5de66720aaf8-lock\") pod \"swift-storage-0\" (UID: \"d0c2888e-984a-482d-b7a3-5de66720aaf8\") " pod="openstack/swift-storage-0" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.591827 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d0c2888e-984a-482d-b7a3-5de66720aaf8-cache\") pod \"swift-storage-0\" (UID: \"d0c2888e-984a-482d-b7a3-5de66720aaf8\") " pod="openstack/swift-storage-0" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.591873 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0c2888e-984a-482d-b7a3-5de66720aaf8-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"d0c2888e-984a-482d-b7a3-5de66720aaf8\") " pod="openstack/swift-storage-0" Jan 
22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.591897 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2mt4\" (UniqueName: \"kubernetes.io/projected/d0c2888e-984a-482d-b7a3-5de66720aaf8-kube-api-access-g2mt4\") pod \"swift-storage-0\" (UID: \"d0c2888e-984a-482d-b7a3-5de66720aaf8\") " pod="openstack/swift-storage-0" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.591928 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d0c2888e-984a-482d-b7a3-5de66720aaf8-etc-swift\") pod \"swift-storage-0\" (UID: \"d0c2888e-984a-482d-b7a3-5de66720aaf8\") " pod="openstack/swift-storage-0" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.591985 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"d0c2888e-984a-482d-b7a3-5de66720aaf8\") " pod="openstack/swift-storage-0" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.592065 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eca1e45-30a1-46b6-9683-fd48782a4aea-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.682402 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.693071 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d0c2888e-984a-482d-b7a3-5de66720aaf8-etc-swift\") pod \"swift-storage-0\" (UID: \"d0c2888e-984a-482d-b7a3-5de66720aaf8\") " pod="openstack/swift-storage-0" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.693164 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"d0c2888e-984a-482d-b7a3-5de66720aaf8\") " pod="openstack/swift-storage-0" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.693202 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/d0c2888e-984a-482d-b7a3-5de66720aaf8-lock\") pod \"swift-storage-0\" (UID: \"d0c2888e-984a-482d-b7a3-5de66720aaf8\") " pod="openstack/swift-storage-0" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.693236 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d0c2888e-984a-482d-b7a3-5de66720aaf8-cache\") pod \"swift-storage-0\" (UID: \"d0c2888e-984a-482d-b7a3-5de66720aaf8\") " pod="openstack/swift-storage-0" Jan 22 09:26:57 crc kubenswrapper[4892]: E0122 09:26:57.693262 4892 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 22 09:26:57 crc kubenswrapper[4892]: E0122 09:26:57.693322 4892 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 22 09:26:57 crc kubenswrapper[4892]: E0122 09:26:57.693387 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0c2888e-984a-482d-b7a3-5de66720aaf8-etc-swift podName:d0c2888e-984a-482d-b7a3-5de66720aaf8 
nodeName:}" failed. No retries permitted until 2026-01-22 09:26:58.193364816 +0000 UTC m=+988.037443979 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d0c2888e-984a-482d-b7a3-5de66720aaf8-etc-swift") pod "swift-storage-0" (UID: "d0c2888e-984a-482d-b7a3-5de66720aaf8") : configmap "swift-ring-files" not found Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.693275 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0c2888e-984a-482d-b7a3-5de66720aaf8-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"d0c2888e-984a-482d-b7a3-5de66720aaf8\") " pod="openstack/swift-storage-0" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.693526 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2mt4\" (UniqueName: \"kubernetes.io/projected/d0c2888e-984a-482d-b7a3-5de66720aaf8-kube-api-access-g2mt4\") pod \"swift-storage-0\" (UID: \"d0c2888e-984a-482d-b7a3-5de66720aaf8\") " pod="openstack/swift-storage-0" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.693569 4892 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"d0c2888e-984a-482d-b7a3-5de66720aaf8\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/swift-storage-0" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.693948 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/d0c2888e-984a-482d-b7a3-5de66720aaf8-lock\") pod \"swift-storage-0\" (UID: \"d0c2888e-984a-482d-b7a3-5de66720aaf8\") " pod="openstack/swift-storage-0" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.694264 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d0c2888e-984a-482d-b7a3-5de66720aaf8-cache\") pod \"swift-storage-0\" (UID: \"d0c2888e-984a-482d-b7a3-5de66720aaf8\") " pod="openstack/swift-storage-0" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.700275 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0c2888e-984a-482d-b7a3-5de66720aaf8-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"d0c2888e-984a-482d-b7a3-5de66720aaf8\") " pod="openstack/swift-storage-0" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.724106 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2mt4\" (UniqueName: \"kubernetes.io/projected/d0c2888e-984a-482d-b7a3-5de66720aaf8-kube-api-access-g2mt4\") pod \"swift-storage-0\" (UID: \"d0c2888e-984a-482d-b7a3-5de66720aaf8\") " pod="openstack/swift-storage-0" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.728318 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"d0c2888e-984a-482d-b7a3-5de66720aaf8\") " pod="openstack/swift-storage-0" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.769785 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-g95bg"] Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.770711 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-g95bg" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.772179 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.772187 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.772261 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.782000 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-g95bg"] Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.792932 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-g95bg"] Jan 22 09:26:57 crc kubenswrapper[4892]: E0122 09:26:57.793505 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-86rm8 ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-g95bg" podUID="69399c6e-eab3-4132-a9eb-d02682014d76" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.794030 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/69399c6e-eab3-4132-a9eb-d02682014d76-etc-swift\") pod \"swift-ring-rebalance-g95bg\" (UID: \"69399c6e-eab3-4132-a9eb-d02682014d76\") " pod="openstack/swift-ring-rebalance-g95bg" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.794062 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69399c6e-eab3-4132-a9eb-d02682014d76-scripts\") pod \"swift-ring-rebalance-g95bg\" (UID: \"69399c6e-eab3-4132-a9eb-d02682014d76\") " pod="openstack/swift-ring-rebalance-g95bg" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.794094 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/69399c6e-eab3-4132-a9eb-d02682014d76-ring-data-devices\") pod \"swift-ring-rebalance-g95bg\" (UID: \"69399c6e-eab3-4132-a9eb-d02682014d76\") " pod="openstack/swift-ring-rebalance-g95bg" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.794129 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69399c6e-eab3-4132-a9eb-d02682014d76-combined-ca-bundle\") pod \"swift-ring-rebalance-g95bg\" (UID: \"69399c6e-eab3-4132-a9eb-d02682014d76\") " pod="openstack/swift-ring-rebalance-g95bg" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.794149 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/69399c6e-eab3-4132-a9eb-d02682014d76-dispersionconf\") pod \"swift-ring-rebalance-g95bg\" (UID: \"69399c6e-eab3-4132-a9eb-d02682014d76\") " pod="openstack/swift-ring-rebalance-g95bg" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.794170 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/69399c6e-eab3-4132-a9eb-d02682014d76-swiftconf\") pod \"swift-ring-rebalance-g95bg\" (UID: \"69399c6e-eab3-4132-a9eb-d02682014d76\") " pod="openstack/swift-ring-rebalance-g95bg" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.794189 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86rm8\" (UniqueName: \"kubernetes.io/projected/69399c6e-eab3-4132-a9eb-d02682014d76-kube-api-access-86rm8\") pod \"swift-ring-rebalance-g95bg\" (UID: \"69399c6e-eab3-4132-a9eb-d02682014d76\") " pod="openstack/swift-ring-rebalance-g95bg" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.806697 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-x9lwk"] Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.807608 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-x9lwk" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.828401 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-x9lwk"] Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.894663 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-etc-swift\") pod \"swift-ring-rebalance-x9lwk\" (UID: \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\") " pod="openstack/swift-ring-rebalance-x9lwk" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.894701 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-swiftconf\") pod \"swift-ring-rebalance-x9lwk\" (UID: \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\") " pod="openstack/swift-ring-rebalance-x9lwk" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.894751 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4r6h\" (UniqueName: \"kubernetes.io/projected/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-kube-api-access-k4r6h\") pod \"swift-ring-rebalance-x9lwk\" (UID: \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\") " pod="openstack/swift-ring-rebalance-x9lwk" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.894776 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-scripts\") pod \"swift-ring-rebalance-x9lwk\" (UID: \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\") " pod="openstack/swift-ring-rebalance-x9lwk" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.894836 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/69399c6e-eab3-4132-a9eb-d02682014d76-etc-swift\") pod \"swift-ring-rebalance-g95bg\" (UID: \"69399c6e-eab3-4132-a9eb-d02682014d76\") " pod="openstack/swift-ring-rebalance-g95bg" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.894862 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69399c6e-eab3-4132-a9eb-d02682014d76-scripts\") pod \"swift-ring-rebalance-g95bg\" (UID: \"69399c6e-eab3-4132-a9eb-d02682014d76\") " pod="openstack/swift-ring-rebalance-g95bg" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.894891 4892 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/69399c6e-eab3-4132-a9eb-d02682014d76-ring-data-devices\") pod \"swift-ring-rebalance-g95bg\" (UID: \"69399c6e-eab3-4132-a9eb-d02682014d76\") " pod="openstack/swift-ring-rebalance-g95bg" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.894933 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-ring-data-devices\") pod \"swift-ring-rebalance-x9lwk\" (UID: \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\") " pod="openstack/swift-ring-rebalance-x9lwk" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.894958 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-dispersionconf\") pod \"swift-ring-rebalance-x9lwk\" (UID: \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\") " pod="openstack/swift-ring-rebalance-x9lwk" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.895211 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/69399c6e-eab3-4132-a9eb-d02682014d76-etc-swift\") pod \"swift-ring-rebalance-g95bg\" (UID: \"69399c6e-eab3-4132-a9eb-d02682014d76\") " pod="openstack/swift-ring-rebalance-g95bg" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.895646 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69399c6e-eab3-4132-a9eb-d02682014d76-scripts\") pod \"swift-ring-rebalance-g95bg\" (UID: \"69399c6e-eab3-4132-a9eb-d02682014d76\") " pod="openstack/swift-ring-rebalance-g95bg" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.895649 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/69399c6e-eab3-4132-a9eb-d02682014d76-ring-data-devices\") pod \"swift-ring-rebalance-g95bg\" (UID: \"69399c6e-eab3-4132-a9eb-d02682014d76\") " pod="openstack/swift-ring-rebalance-g95bg" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.895720 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69399c6e-eab3-4132-a9eb-d02682014d76-combined-ca-bundle\") pod \"swift-ring-rebalance-g95bg\" (UID: \"69399c6e-eab3-4132-a9eb-d02682014d76\") " pod="openstack/swift-ring-rebalance-g95bg" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.895771 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/69399c6e-eab3-4132-a9eb-d02682014d76-dispersionconf\") pod \"swift-ring-rebalance-g95bg\" (UID: \"69399c6e-eab3-4132-a9eb-d02682014d76\") " pod="openstack/swift-ring-rebalance-g95bg" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.895805 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/69399c6e-eab3-4132-a9eb-d02682014d76-swiftconf\") pod \"swift-ring-rebalance-g95bg\" (UID: \"69399c6e-eab3-4132-a9eb-d02682014d76\") " pod="openstack/swift-ring-rebalance-g95bg" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.895825 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86rm8\" (UniqueName: 
\"kubernetes.io/projected/69399c6e-eab3-4132-a9eb-d02682014d76-kube-api-access-86rm8\") pod \"swift-ring-rebalance-g95bg\" (UID: \"69399c6e-eab3-4132-a9eb-d02682014d76\") " pod="openstack/swift-ring-rebalance-g95bg" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.895848 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-combined-ca-bundle\") pod \"swift-ring-rebalance-x9lwk\" (UID: \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\") " pod="openstack/swift-ring-rebalance-x9lwk" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.898704 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69399c6e-eab3-4132-a9eb-d02682014d76-combined-ca-bundle\") pod \"swift-ring-rebalance-g95bg\" (UID: \"69399c6e-eab3-4132-a9eb-d02682014d76\") " pod="openstack/swift-ring-rebalance-g95bg" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.898997 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/69399c6e-eab3-4132-a9eb-d02682014d76-swiftconf\") pod \"swift-ring-rebalance-g95bg\" (UID: \"69399c6e-eab3-4132-a9eb-d02682014d76\") " pod="openstack/swift-ring-rebalance-g95bg" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.899951 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/69399c6e-eab3-4132-a9eb-d02682014d76-dispersionconf\") pod \"swift-ring-rebalance-g95bg\" (UID: \"69399c6e-eab3-4132-a9eb-d02682014d76\") " pod="openstack/swift-ring-rebalance-g95bg" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.912047 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86rm8\" (UniqueName: \"kubernetes.io/projected/69399c6e-eab3-4132-a9eb-d02682014d76-kube-api-access-86rm8\") pod \"swift-ring-rebalance-g95bg\" (UID: \"69399c6e-eab3-4132-a9eb-d02682014d76\") " pod="openstack/swift-ring-rebalance-g95bg" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.996448 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-ring-data-devices\") pod \"swift-ring-rebalance-x9lwk\" (UID: \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\") " pod="openstack/swift-ring-rebalance-x9lwk" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.996507 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-dispersionconf\") pod \"swift-ring-rebalance-x9lwk\" (UID: \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\") " pod="openstack/swift-ring-rebalance-x9lwk" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.996561 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-combined-ca-bundle\") pod \"swift-ring-rebalance-x9lwk\" (UID: \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\") " pod="openstack/swift-ring-rebalance-x9lwk" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.996617 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-etc-swift\") pod 
\"swift-ring-rebalance-x9lwk\" (UID: \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\") " pod="openstack/swift-ring-rebalance-x9lwk" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.996702 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-swiftconf\") pod \"swift-ring-rebalance-x9lwk\" (UID: \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\") " pod="openstack/swift-ring-rebalance-x9lwk" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.996810 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4r6h\" (UniqueName: \"kubernetes.io/projected/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-kube-api-access-k4r6h\") pod \"swift-ring-rebalance-x9lwk\" (UID: \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\") " pod="openstack/swift-ring-rebalance-x9lwk" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.997116 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-etc-swift\") pod \"swift-ring-rebalance-x9lwk\" (UID: \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\") " pod="openstack/swift-ring-rebalance-x9lwk" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.997158 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-ring-data-devices\") pod \"swift-ring-rebalance-x9lwk\" (UID: \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\") " pod="openstack/swift-ring-rebalance-x9lwk" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.997184 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-scripts\") pod \"swift-ring-rebalance-x9lwk\" (UID: \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\") " pod="openstack/swift-ring-rebalance-x9lwk" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.997862 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-scripts\") pod \"swift-ring-rebalance-x9lwk\" (UID: \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\") " pod="openstack/swift-ring-rebalance-x9lwk" Jan 22 09:26:57 crc kubenswrapper[4892]: I0122 09:26:57.999839 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-dispersionconf\") pod \"swift-ring-rebalance-x9lwk\" (UID: \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\") " pod="openstack/swift-ring-rebalance-x9lwk" Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.000302 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-combined-ca-bundle\") pod \"swift-ring-rebalance-x9lwk\" (UID: \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\") " pod="openstack/swift-ring-rebalance-x9lwk" Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.000658 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-swiftconf\") pod \"swift-ring-rebalance-x9lwk\" (UID: \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\") " pod="openstack/swift-ring-rebalance-x9lwk" Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 
09:26:58.019173 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4r6h\" (UniqueName: \"kubernetes.io/projected/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-kube-api-access-k4r6h\") pod \"swift-ring-rebalance-x9lwk\" (UID: \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\") " pod="openstack/swift-ring-rebalance-x9lwk" Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.121014 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-x9lwk" Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.195470 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55tz7" event={"ID":"0eca1e45-30a1-46b6-9683-fd48782a4aea","Type":"ContainerDied","Data":"3066ab9cf4137c322203774183c708bf68c2337c7cb75d4234fd227cbbeb11a0"} Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.195535 4892 scope.go:117] "RemoveContainer" containerID="e12865381cb945841151c166ce8409388b577e2309887b7a7d14b0a57d0c57d0" Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.195693 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-55tz7" Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.198151 4892 generic.go:334] "Generic (PLEG): container finished" podID="d66d181d-3705-4a58-8091-2bef209b355c" containerID="17e66fb14d1b6ec93a3c7e2c1fce3db7e8a71d586e2f3b5a98c2d01b7d580d39" exitCode=0 Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.198210 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-4flns" event={"ID":"d66d181d-3705-4a58-8091-2bef209b355c","Type":"ContainerDied","Data":"17e66fb14d1b6ec93a3c7e2c1fce3db7e8a71d586e2f3b5a98c2d01b7d580d39"} Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.198276 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-4flns" event={"ID":"d66d181d-3705-4a58-8091-2bef209b355c","Type":"ContainerStarted","Data":"c8e4d7870a229d7623714c1e2f5107cc627b5a09f53ac52a847489dffd01581b"} Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.198351 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-g95bg" Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.200111 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d0c2888e-984a-482d-b7a3-5de66720aaf8-etc-swift\") pod \"swift-storage-0\" (UID: \"d0c2888e-984a-482d-b7a3-5de66720aaf8\") " pod="openstack/swift-storage-0" Jan 22 09:26:58 crc kubenswrapper[4892]: E0122 09:26:58.200266 4892 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 22 09:26:58 crc kubenswrapper[4892]: E0122 09:26:58.200297 4892 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 22 09:26:58 crc kubenswrapper[4892]: E0122 09:26:58.200339 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0c2888e-984a-482d-b7a3-5de66720aaf8-etc-swift podName:d0c2888e-984a-482d-b7a3-5de66720aaf8 nodeName:}" failed. No retries permitted until 2026-01-22 09:26:59.200325222 +0000 UTC m=+989.044404285 (durationBeforeRetry 1s). 
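Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d0c2888e-984a-482d-b7a3-5de66720aaf8-etc-swift") pod "swift-storage-0" (UID: "d0c2888e-984a-482d-b7a3-5de66720aaf8") : configmap "swift-ring-files" not found

The failure above is a dependency-ordering issue rather than a mount bug: the etc-swift volume on swift-storage-0 is a projected volume built from the swift-ring-files configmap, and that configmap is only published once a swift-ring-rebalance job completes. Until then every MountVolume.SetUp attempt fails and the kubelet re-queues it with a doubling backoff (durationBeforeRetry 1s here, then 2s, 4s and 8s further down). A minimal sketch of checking the missing dependency from outside the kubelet, assuming a standard client-go setup; the namespace and configmap name are taken from the records above:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client the way kubectl does (assumes a reachable kubeconfig).
        cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
            clientcmd.NewDefaultClientConfigLoadingRules(),
            &clientcmd.ConfigOverrides{},
        ).ClientConfig()
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // The projected volume references configmap "swift-ring-files" in
        // namespace "openstack"; until it exists, MountVolume.SetUp keeps failing.
        _, err = cs.CoreV1().ConfigMaps("openstack").Get(context.TODO(),
            "swift-ring-files", metav1.GetOptions{})
        if err != nil {
            fmt.Println("still missing, kubelet will keep retrying:", err)
            return
        }
        fmt.Println("swift-ring-files exists; the next SetUp retry should succeed")
    }

Once the rebalance job writes the rings into the configmap, the pending retry should clear on its own; no kubelet intervention is needed.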
Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.214272 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-g95bg" Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.247526 4892 scope.go:117] "RemoveContainer" containerID="ef5a7161854f55e5f7dc2908baf3317327e7198ed58f21f4c29d7fac16f8e9b8" Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.270146 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-55tz7"] Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.300825 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/69399c6e-eab3-4132-a9eb-d02682014d76-ring-data-devices\") pod \"69399c6e-eab3-4132-a9eb-d02682014d76\" (UID: \"69399c6e-eab3-4132-a9eb-d02682014d76\") " Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.300861 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69399c6e-eab3-4132-a9eb-d02682014d76-scripts\") pod \"69399c6e-eab3-4132-a9eb-d02682014d76\" (UID: \"69399c6e-eab3-4132-a9eb-d02682014d76\") " Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.300902 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/69399c6e-eab3-4132-a9eb-d02682014d76-etc-swift\") pod \"69399c6e-eab3-4132-a9eb-d02682014d76\" (UID: \"69399c6e-eab3-4132-a9eb-d02682014d76\") " Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.300937 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86rm8\" (UniqueName: \"kubernetes.io/projected/69399c6e-eab3-4132-a9eb-d02682014d76-kube-api-access-86rm8\") pod \"69399c6e-eab3-4132-a9eb-d02682014d76\" (UID: \"69399c6e-eab3-4132-a9eb-d02682014d76\") " Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.300981 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/69399c6e-eab3-4132-a9eb-d02682014d76-dispersionconf\") pod \"69399c6e-eab3-4132-a9eb-d02682014d76\" (UID: \"69399c6e-eab3-4132-a9eb-d02682014d76\") " Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.301010 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69399c6e-eab3-4132-a9eb-d02682014d76-combined-ca-bundle\") pod \"69399c6e-eab3-4132-a9eb-d02682014d76\" (UID: \"69399c6e-eab3-4132-a9eb-d02682014d76\") " Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.301044 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/69399c6e-eab3-4132-a9eb-d02682014d76-swiftconf\") pod \"69399c6e-eab3-4132-a9eb-d02682014d76\" (UID: \"69399c6e-eab3-4132-a9eb-d02682014d76\") " Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.301232 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69399c6e-eab3-4132-a9eb-d02682014d76-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "69399c6e-eab3-4132-a9eb-d02682014d76" 
(UID: "69399c6e-eab3-4132-a9eb-d02682014d76"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.301639 4892 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/69399c6e-eab3-4132-a9eb-d02682014d76-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.302226 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69399c6e-eab3-4132-a9eb-d02682014d76-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "69399c6e-eab3-4132-a9eb-d02682014d76" (UID: "69399c6e-eab3-4132-a9eb-d02682014d76"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.302604 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69399c6e-eab3-4132-a9eb-d02682014d76-scripts" (OuterVolumeSpecName: "scripts") pod "69399c6e-eab3-4132-a9eb-d02682014d76" (UID: "69399c6e-eab3-4132-a9eb-d02682014d76"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.302937 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-55tz7"] Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.304814 4892 scope.go:117] "RemoveContainer" containerID="2c4dc26e9c7fad02cb5c75fbeed3bb0bc7465259520bba6eeb6c4b1d66d41e3c" Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.305755 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69399c6e-eab3-4132-a9eb-d02682014d76-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69399c6e-eab3-4132-a9eb-d02682014d76" (UID: "69399c6e-eab3-4132-a9eb-d02682014d76"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.305765 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69399c6e-eab3-4132-a9eb-d02682014d76-kube-api-access-86rm8" (OuterVolumeSpecName: "kube-api-access-86rm8") pod "69399c6e-eab3-4132-a9eb-d02682014d76" (UID: "69399c6e-eab3-4132-a9eb-d02682014d76"). InnerVolumeSpecName "kube-api-access-86rm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.307964 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69399c6e-eab3-4132-a9eb-d02682014d76-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "69399c6e-eab3-4132-a9eb-d02682014d76" (UID: "69399c6e-eab3-4132-a9eb-d02682014d76"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.308408 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69399c6e-eab3-4132-a9eb-d02682014d76-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "69399c6e-eab3-4132-a9eb-d02682014d76" (UID: "69399c6e-eab3-4132-a9eb-d02682014d76"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.402631 4892 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/69399c6e-eab3-4132-a9eb-d02682014d76-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.402963 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69399c6e-eab3-4132-a9eb-d02682014d76-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.402974 4892 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/69399c6e-eab3-4132-a9eb-d02682014d76-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.402985 4892 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69399c6e-eab3-4132-a9eb-d02682014d76-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.402993 4892 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/69399c6e-eab3-4132-a9eb-d02682014d76-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.403002 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86rm8\" (UniqueName: \"kubernetes.io/projected/69399c6e-eab3-4132-a9eb-d02682014d76-kube-api-access-86rm8\") on node \"crc\" DevicePath \"\"" Jan 22 09:26:58 crc kubenswrapper[4892]: I0122 09:26:58.603564 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-x9lwk"] Jan 22 09:26:59 crc kubenswrapper[4892]: I0122 09:26:59.209250 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-4flns" event={"ID":"d66d181d-3705-4a58-8091-2bef209b355c","Type":"ContainerStarted","Data":"1802c480527504a240f7ff8eec03d66747e8800343ff31f84f31e19dd58c9f50"} Jan 22 09:26:59 crc kubenswrapper[4892]: I0122 09:26:59.210236 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cb545bd4c-4flns" Jan 22 09:26:59 crc kubenswrapper[4892]: I0122 09:26:59.212728 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d0c2888e-984a-482d-b7a3-5de66720aaf8-etc-swift\") pod \"swift-storage-0\" (UID: \"d0c2888e-984a-482d-b7a3-5de66720aaf8\") " pod="openstack/swift-storage-0" Jan 22 09:26:59 crc kubenswrapper[4892]: E0122 09:26:59.213036 4892 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 22 09:26:59 crc kubenswrapper[4892]: E0122 09:26:59.213056 4892 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 22 09:26:59 crc kubenswrapper[4892]: E0122 09:26:59.213089 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0c2888e-984a-482d-b7a3-5de66720aaf8-etc-swift podName:d0c2888e-984a-482d-b7a3-5de66720aaf8 nodeName:}" failed. No retries permitted until 2026-01-22 09:27:01.213077496 +0000 UTC m=+991.057156559 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d0c2888e-984a-482d-b7a3-5de66720aaf8-etc-swift") pod "swift-storage-0" (UID: "d0c2888e-984a-482d-b7a3-5de66720aaf8") : configmap "swift-ring-files" not found Jan 22 09:26:59 crc kubenswrapper[4892]: I0122 09:26:59.214980 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-x9lwk" event={"ID":"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc","Type":"ContainerStarted","Data":"95d206b4c559c1fd95f4cf99ce26863b9fb24a3c8bbd1d177c7b69922a92e53a"} Jan 22 09:26:59 crc kubenswrapper[4892]: I0122 09:26:59.214988 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-g95bg" Jan 22 09:26:59 crc kubenswrapper[4892]: I0122 09:26:59.230044 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cb545bd4c-4flns" podStartSLOduration=3.230025022 podStartE2EDuration="3.230025022s" podCreationTimestamp="2026-01-22 09:26:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:26:59.226682222 +0000 UTC m=+989.070761285" watchObservedRunningTime="2026-01-22 09:26:59.230025022 +0000 UTC m=+989.074104085" Jan 22 09:26:59 crc kubenswrapper[4892]: I0122 09:26:59.265863 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-g95bg"] Jan 22 09:26:59 crc kubenswrapper[4892]: I0122 09:26:59.273810 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-g95bg"] Jan 22 09:26:59 crc kubenswrapper[4892]: I0122 09:26:59.444258 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eca1e45-30a1-46b6-9683-fd48782a4aea" path="/var/lib/kubelet/pods/0eca1e45-30a1-46b6-9683-fd48782a4aea/volumes" Jan 22 09:26:59 crc kubenswrapper[4892]: I0122 09:26:59.446391 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69399c6e-eab3-4132-a9eb-d02682014d76" path="/var/lib/kubelet/pods/69399c6e-eab3-4132-a9eb-d02682014d76/volumes" Jan 22 09:27:01 crc kubenswrapper[4892]: I0122 09:27:01.243735 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d0c2888e-984a-482d-b7a3-5de66720aaf8-etc-swift\") pod \"swift-storage-0\" (UID: \"d0c2888e-984a-482d-b7a3-5de66720aaf8\") " pod="openstack/swift-storage-0" Jan 22 09:27:01 crc kubenswrapper[4892]: E0122 09:27:01.243995 4892 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 22 09:27:01 crc kubenswrapper[4892]: E0122 09:27:01.244175 4892 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 22 09:27:01 crc kubenswrapper[4892]: E0122 09:27:01.244232 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0c2888e-984a-482d-b7a3-5de66720aaf8-etc-swift podName:d0c2888e-984a-482d-b7a3-5de66720aaf8 nodeName:}" failed. No retries permitted until 2026-01-22 09:27:05.244215678 +0000 UTC m=+995.088294751 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d0c2888e-984a-482d-b7a3-5de66720aaf8-etc-swift") pod "swift-storage-0" (UID: "d0c2888e-984a-482d-b7a3-5de66720aaf8") : configmap "swift-ring-files" not found Jan 22 09:27:02 crc kubenswrapper[4892]: I0122 09:27:02.339735 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 22 09:27:02 crc kubenswrapper[4892]: I0122 09:27:02.340057 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 22 09:27:02 crc kubenswrapper[4892]: I0122 09:27:02.414273 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.247392 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-x9lwk" event={"ID":"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc","Type":"ContainerStarted","Data":"ea856c436fe317b662068ef7d89a2247160ce86b17545c6d111ab974808fdb58"} Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.271467 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-x9lwk" podStartSLOduration=2.700660917 podStartE2EDuration="6.271450597s" podCreationTimestamp="2026-01-22 09:26:57 +0000 UTC" firstStartedPulling="2026-01-22 09:26:58.609115394 +0000 UTC m=+988.453194447" lastFinishedPulling="2026-01-22 09:27:02.179905064 +0000 UTC m=+992.023984127" observedRunningTime="2026-01-22 09:27:03.270662888 +0000 UTC m=+993.114741981" watchObservedRunningTime="2026-01-22 09:27:03.271450597 +0000 UTC m=+993.115529660" Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.330005 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.501623 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-a427-account-create-update-xncm7"] Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.502817 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a427-account-create-update-xncm7" Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.506849 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.513046 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a427-account-create-update-xncm7"] Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.570511 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-96xfx"] Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.571531 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-96xfx" Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.578372 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-96xfx"] Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.594014 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4hxs\" (UniqueName: \"kubernetes.io/projected/e479527f-d4fd-4c96-ab6f-8c205525df26-kube-api-access-s4hxs\") pod \"keystone-a427-account-create-update-xncm7\" (UID: \"e479527f-d4fd-4c96-ab6f-8c205525df26\") " pod="openstack/keystone-a427-account-create-update-xncm7" Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.594078 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e479527f-d4fd-4c96-ab6f-8c205525df26-operator-scripts\") pod \"keystone-a427-account-create-update-xncm7\" (UID: \"e479527f-d4fd-4c96-ab6f-8c205525df26\") " pod="openstack/keystone-a427-account-create-update-xncm7" Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.640684 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.640725 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.695871 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e479527f-d4fd-4c96-ab6f-8c205525df26-operator-scripts\") pod \"keystone-a427-account-create-update-xncm7\" (UID: \"e479527f-d4fd-4c96-ab6f-8c205525df26\") " pod="openstack/keystone-a427-account-create-update-xncm7" Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.696008 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3562a65-6cdb-43f8-972c-32c454f33a14-operator-scripts\") pod \"keystone-db-create-96xfx\" (UID: \"b3562a65-6cdb-43f8-972c-32c454f33a14\") " pod="openstack/keystone-db-create-96xfx" Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.696035 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxd7h\" (UniqueName: \"kubernetes.io/projected/b3562a65-6cdb-43f8-972c-32c454f33a14-kube-api-access-bxd7h\") pod \"keystone-db-create-96xfx\" (UID: \"b3562a65-6cdb-43f8-972c-32c454f33a14\") " pod="openstack/keystone-db-create-96xfx" Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.696071 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4hxs\" (UniqueName: \"kubernetes.io/projected/e479527f-d4fd-4c96-ab6f-8c205525df26-kube-api-access-s4hxs\") pod \"keystone-a427-account-create-update-xncm7\" (UID: \"e479527f-d4fd-4c96-ab6f-8c205525df26\") " pod="openstack/keystone-a427-account-create-update-xncm7" Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.696984 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e479527f-d4fd-4c96-ab6f-8c205525df26-operator-scripts\") pod \"keystone-a427-account-create-update-xncm7\" (UID: \"e479527f-d4fd-4c96-ab6f-8c205525df26\") " pod="openstack/keystone-a427-account-create-update-xncm7" Jan 22 
09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.718928 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4hxs\" (UniqueName: \"kubernetes.io/projected/e479527f-d4fd-4c96-ab6f-8c205525df26-kube-api-access-s4hxs\") pod \"keystone-a427-account-create-update-xncm7\" (UID: \"e479527f-d4fd-4c96-ab6f-8c205525df26\") " pod="openstack/keystone-a427-account-create-update-xncm7" Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.744257 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.788356 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-cg57h"] Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.789627 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-cg57h" Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.795794 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-cg57h"] Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.797884 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3562a65-6cdb-43f8-972c-32c454f33a14-operator-scripts\") pod \"keystone-db-create-96xfx\" (UID: \"b3562a65-6cdb-43f8-972c-32c454f33a14\") " pod="openstack/keystone-db-create-96xfx" Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.797925 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxd7h\" (UniqueName: \"kubernetes.io/projected/b3562a65-6cdb-43f8-972c-32c454f33a14-kube-api-access-bxd7h\") pod \"keystone-db-create-96xfx\" (UID: \"b3562a65-6cdb-43f8-972c-32c454f33a14\") " pod="openstack/keystone-db-create-96xfx" Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.798726 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3562a65-6cdb-43f8-972c-32c454f33a14-operator-scripts\") pod \"keystone-db-create-96xfx\" (UID: \"b3562a65-6cdb-43f8-972c-32c454f33a14\") " pod="openstack/keystone-db-create-96xfx" Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.827811 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxd7h\" (UniqueName: \"kubernetes.io/projected/b3562a65-6cdb-43f8-972c-32c454f33a14-kube-api-access-bxd7h\") pod \"keystone-db-create-96xfx\" (UID: \"b3562a65-6cdb-43f8-972c-32c454f33a14\") " pod="openstack/keystone-db-create-96xfx" Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.831385 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a427-account-create-update-xncm7" Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.901812 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-d54a-account-create-update-zv67x"] Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.902909 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d54a-account-create-update-zv67x" Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.904300 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf8l7\" (UniqueName: \"kubernetes.io/projected/1deb2fb8-da6b-4557-bac8-48ad0bc42e52-kube-api-access-tf8l7\") pod \"placement-db-create-cg57h\" (UID: \"1deb2fb8-da6b-4557-bac8-48ad0bc42e52\") " pod="openstack/placement-db-create-cg57h" Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.904396 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1deb2fb8-da6b-4557-bac8-48ad0bc42e52-operator-scripts\") pod \"placement-db-create-cg57h\" (UID: \"1deb2fb8-da6b-4557-bac8-48ad0bc42e52\") " pod="openstack/placement-db-create-cg57h" Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.904679 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-96xfx" Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.914138 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d54a-account-create-update-zv67x"] Jan 22 09:27:03 crc kubenswrapper[4892]: I0122 09:27:03.919543 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.007130 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqvg7\" (UniqueName: \"kubernetes.io/projected/08202c5a-0fd3-454b-b3e8-fe19c0abfb1d-kube-api-access-gqvg7\") pod \"placement-d54a-account-create-update-zv67x\" (UID: \"08202c5a-0fd3-454b-b3e8-fe19c0abfb1d\") " pod="openstack/placement-d54a-account-create-update-zv67x" Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.007183 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf8l7\" (UniqueName: \"kubernetes.io/projected/1deb2fb8-da6b-4557-bac8-48ad0bc42e52-kube-api-access-tf8l7\") pod \"placement-db-create-cg57h\" (UID: \"1deb2fb8-da6b-4557-bac8-48ad0bc42e52\") " pod="openstack/placement-db-create-cg57h" Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.007212 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08202c5a-0fd3-454b-b3e8-fe19c0abfb1d-operator-scripts\") pod \"placement-d54a-account-create-update-zv67x\" (UID: \"08202c5a-0fd3-454b-b3e8-fe19c0abfb1d\") " pod="openstack/placement-d54a-account-create-update-zv67x" Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.007253 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1deb2fb8-da6b-4557-bac8-48ad0bc42e52-operator-scripts\") pod \"placement-db-create-cg57h\" (UID: \"1deb2fb8-da6b-4557-bac8-48ad0bc42e52\") " pod="openstack/placement-db-create-cg57h" Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.007970 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1deb2fb8-da6b-4557-bac8-48ad0bc42e52-operator-scripts\") pod \"placement-db-create-cg57h\" (UID: \"1deb2fb8-da6b-4557-bac8-48ad0bc42e52\") " pod="openstack/placement-db-create-cg57h" Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.034175 4892 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf8l7\" (UniqueName: \"kubernetes.io/projected/1deb2fb8-da6b-4557-bac8-48ad0bc42e52-kube-api-access-tf8l7\") pod \"placement-db-create-cg57h\" (UID: \"1deb2fb8-da6b-4557-bac8-48ad0bc42e52\") " pod="openstack/placement-db-create-cg57h" Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.095177 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-bdn46"] Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.096234 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-bdn46" Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.105552 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-bdn46"] Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.108403 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-cg57h" Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.110172 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08202c5a-0fd3-454b-b3e8-fe19c0abfb1d-operator-scripts\") pod \"placement-d54a-account-create-update-zv67x\" (UID: \"08202c5a-0fd3-454b-b3e8-fe19c0abfb1d\") " pod="openstack/placement-d54a-account-create-update-zv67x" Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.110330 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqvg7\" (UniqueName: \"kubernetes.io/projected/08202c5a-0fd3-454b-b3e8-fe19c0abfb1d-kube-api-access-gqvg7\") pod \"placement-d54a-account-create-update-zv67x\" (UID: \"08202c5a-0fd3-454b-b3e8-fe19c0abfb1d\") " pod="openstack/placement-d54a-account-create-update-zv67x" Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.111868 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08202c5a-0fd3-454b-b3e8-fe19c0abfb1d-operator-scripts\") pod \"placement-d54a-account-create-update-zv67x\" (UID: \"08202c5a-0fd3-454b-b3e8-fe19c0abfb1d\") " pod="openstack/placement-d54a-account-create-update-zv67x" Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.129746 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqvg7\" (UniqueName: \"kubernetes.io/projected/08202c5a-0fd3-454b-b3e8-fe19c0abfb1d-kube-api-access-gqvg7\") pod \"placement-d54a-account-create-update-zv67x\" (UID: \"08202c5a-0fd3-454b-b3e8-fe19c0abfb1d\") " pod="openstack/placement-d54a-account-create-update-zv67x" Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.190069 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-a653-account-create-update-ggqvv"] Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.191045 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a653-account-create-update-ggqvv" Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.201104 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.213251 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1-operator-scripts\") pod \"glance-db-create-bdn46\" (UID: \"44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1\") " pod="openstack/glance-db-create-bdn46" Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.213348 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqw24\" (UniqueName: \"kubernetes.io/projected/44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1-kube-api-access-jqw24\") pod \"glance-db-create-bdn46\" (UID: \"44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1\") " pod="openstack/glance-db-create-bdn46" Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.219121 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a653-account-create-update-ggqvv"] Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.315206 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dm77\" (UniqueName: \"kubernetes.io/projected/8626f341-0124-473c-a852-58b5c4e24c0a-kube-api-access-6dm77\") pod \"glance-a653-account-create-update-ggqvv\" (UID: \"8626f341-0124-473c-a852-58b5c4e24c0a\") " pod="openstack/glance-a653-account-create-update-ggqvv" Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.315374 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1-operator-scripts\") pod \"glance-db-create-bdn46\" (UID: \"44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1\") " pod="openstack/glance-db-create-bdn46" Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.315394 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8626f341-0124-473c-a852-58b5c4e24c0a-operator-scripts\") pod \"glance-a653-account-create-update-ggqvv\" (UID: \"8626f341-0124-473c-a852-58b5c4e24c0a\") " pod="openstack/glance-a653-account-create-update-ggqvv" Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.315427 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqw24\" (UniqueName: \"kubernetes.io/projected/44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1-kube-api-access-jqw24\") pod \"glance-db-create-bdn46\" (UID: \"44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1\") " pod="openstack/glance-db-create-bdn46" Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.316301 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1-operator-scripts\") pod \"glance-db-create-bdn46\" (UID: \"44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1\") " pod="openstack/glance-db-create-bdn46" Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.316315 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d54a-account-create-update-zv67x" Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.380906 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.385780 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqw24\" (UniqueName: \"kubernetes.io/projected/44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1-kube-api-access-jqw24\") pod \"glance-db-create-bdn46\" (UID: \"44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1\") " pod="openstack/glance-db-create-bdn46" Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.417377 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8626f341-0124-473c-a852-58b5c4e24c0a-operator-scripts\") pod \"glance-a653-account-create-update-ggqvv\" (UID: \"8626f341-0124-473c-a852-58b5c4e24c0a\") " pod="openstack/glance-a653-account-create-update-ggqvv" Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.418277 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dm77\" (UniqueName: \"kubernetes.io/projected/8626f341-0124-473c-a852-58b5c4e24c0a-kube-api-access-6dm77\") pod \"glance-a653-account-create-update-ggqvv\" (UID: \"8626f341-0124-473c-a852-58b5c4e24c0a\") " pod="openstack/glance-a653-account-create-update-ggqvv" Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.425105 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8626f341-0124-473c-a852-58b5c4e24c0a-operator-scripts\") pod \"glance-a653-account-create-update-ggqvv\" (UID: \"8626f341-0124-473c-a852-58b5c4e24c0a\") " pod="openstack/glance-a653-account-create-update-ggqvv" Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.435356 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-bdn46" Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.444922 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dm77\" (UniqueName: \"kubernetes.io/projected/8626f341-0124-473c-a852-58b5c4e24c0a-kube-api-access-6dm77\") pod \"glance-a653-account-create-update-ggqvv\" (UID: \"8626f341-0124-473c-a852-58b5c4e24c0a\") " pod="openstack/glance-a653-account-create-update-ggqvv" Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.508634 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a427-account-create-update-xncm7"] Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.515748 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-96xfx"] Jan 22 09:27:04 crc kubenswrapper[4892]: W0122 09:27:04.517917 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3562a65_6cdb_43f8_972c_32c454f33a14.slice/crio-57c20b999b2dc1c479c0fb104692d873d07d596d1f2f8d257994730a2dd87a30 WatchSource:0}: Error finding container 57c20b999b2dc1c479c0fb104692d873d07d596d1f2f8d257994730a2dd87a30: Status 404 returned error can't find the container with id 57c20b999b2dc1c479c0fb104692d873d07d596d1f2f8d257994730a2dd87a30 Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.541879 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a653-account-create-update-ggqvv" Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.722632 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-cg57h"] Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.834733 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d54a-account-create-update-zv67x"] Jan 22 09:27:04 crc kubenswrapper[4892]: I0122 09:27:04.987811 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-bdn46"] Jan 22 09:27:05 crc kubenswrapper[4892]: W0122 09:27:05.005558 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44bfbab6_2053_4a28_a9e3_01c8bf6e8cc1.slice/crio-cc704b578fe2e45bf6edf49549ffe5c9e09f3a1be2a0ecc178a9c3e1dc93d8c9 WatchSource:0}: Error finding container cc704b578fe2e45bf6edf49549ffe5c9e09f3a1be2a0ecc178a9c3e1dc93d8c9: Status 404 returned error can't find the container with id cc704b578fe2e45bf6edf49549ffe5c9e09f3a1be2a0ecc178a9c3e1dc93d8c9 Jan 22 09:27:05 crc kubenswrapper[4892]: I0122 09:27:05.154452 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a653-account-create-update-ggqvv"] Jan 22 09:27:05 crc kubenswrapper[4892]: I0122 09:27:05.268768 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a653-account-create-update-ggqvv" event={"ID":"8626f341-0124-473c-a852-58b5c4e24c0a","Type":"ContainerStarted","Data":"71a063895421e09562d33ff95a6fe4b90870f0f7bb0e5254eaeb2c1426176e5b"} Jan 22 09:27:05 crc kubenswrapper[4892]: I0122 09:27:05.271004 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-cg57h" event={"ID":"1deb2fb8-da6b-4557-bac8-48ad0bc42e52","Type":"ContainerStarted","Data":"ebfd97177e96052e5f34b6a1e08132d32cedec6a82b2370f3f674152dad4d597"} Jan 22 09:27:05 crc kubenswrapper[4892]: I0122 09:27:05.271060 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-cg57h" event={"ID":"1deb2fb8-da6b-4557-bac8-48ad0bc42e52","Type":"ContainerStarted","Data":"cd67b940985d4a820b32088e515c550a0762ecd87dfa0b2bf7e47f8d063134a1"} Jan 22 09:27:05 crc kubenswrapper[4892]: I0122 09:27:05.273270 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d54a-account-create-update-zv67x" event={"ID":"08202c5a-0fd3-454b-b3e8-fe19c0abfb1d","Type":"ContainerStarted","Data":"c58ad6737c6e9d0e15931d7119af076b7652496f8a9f2552718d2db3884992b4"} Jan 22 09:27:05 crc kubenswrapper[4892]: I0122 09:27:05.273315 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d54a-account-create-update-zv67x" event={"ID":"08202c5a-0fd3-454b-b3e8-fe19c0abfb1d","Type":"ContainerStarted","Data":"7a0ccfa14b8cf6b63388697ab1a47f722ae7a5429a30bfc17723417e8a656319"} Jan 22 09:27:05 crc kubenswrapper[4892]: I0122 09:27:05.276902 4892 generic.go:334] "Generic (PLEG): container finished" podID="e479527f-d4fd-4c96-ab6f-8c205525df26" containerID="498e6bf79a012bd2f40212aec661feade85d4753213f3fa8b6dadec92e1ca86c" exitCode=0 Jan 22 09:27:05 crc kubenswrapper[4892]: I0122 09:27:05.276971 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a427-account-create-update-xncm7" event={"ID":"e479527f-d4fd-4c96-ab6f-8c205525df26","Type":"ContainerDied","Data":"498e6bf79a012bd2f40212aec661feade85d4753213f3fa8b6dadec92e1ca86c"} Jan 22 09:27:05 crc kubenswrapper[4892]: I0122 09:27:05.276999 4892 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a427-account-create-update-xncm7" event={"ID":"e479527f-d4fd-4c96-ab6f-8c205525df26","Type":"ContainerStarted","Data":"c4dfcd6f92eb18ea66792452c83c9813d26bbc7ddd6186c0003f95e7224227c4"} Jan 22 09:27:05 crc kubenswrapper[4892]: I0122 09:27:05.278757 4892 generic.go:334] "Generic (PLEG): container finished" podID="b3562a65-6cdb-43f8-972c-32c454f33a14" containerID="891be5c97fdb5d7505f3036256c3dc1d789fc5311981832d41082b6be9d39af9" exitCode=0 Jan 22 09:27:05 crc kubenswrapper[4892]: I0122 09:27:05.278839 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-96xfx" event={"ID":"b3562a65-6cdb-43f8-972c-32c454f33a14","Type":"ContainerDied","Data":"891be5c97fdb5d7505f3036256c3dc1d789fc5311981832d41082b6be9d39af9"} Jan 22 09:27:05 crc kubenswrapper[4892]: I0122 09:27:05.278870 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-96xfx" event={"ID":"b3562a65-6cdb-43f8-972c-32c454f33a14","Type":"ContainerStarted","Data":"57c20b999b2dc1c479c0fb104692d873d07d596d1f2f8d257994730a2dd87a30"} Jan 22 09:27:05 crc kubenswrapper[4892]: I0122 09:27:05.281156 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-bdn46" event={"ID":"44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1","Type":"ContainerStarted","Data":"4c59aaa0f642a96824fa24444ea1c48bee2b075e22415b626adce42e06642d56"} Jan 22 09:27:05 crc kubenswrapper[4892]: I0122 09:27:05.281214 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-bdn46" event={"ID":"44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1","Type":"ContainerStarted","Data":"cc704b578fe2e45bf6edf49549ffe5c9e09f3a1be2a0ecc178a9c3e1dc93d8c9"} Jan 22 09:27:05 crc kubenswrapper[4892]: I0122 09:27:05.305798 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-bdn46" podStartSLOduration=1.3057814159999999 podStartE2EDuration="1.305781416s" podCreationTimestamp="2026-01-22 09:27:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:27:05.297825045 +0000 UTC m=+995.141904108" watchObservedRunningTime="2026-01-22 09:27:05.305781416 +0000 UTC m=+995.149860479" Jan 22 09:27:05 crc kubenswrapper[4892]: I0122 09:27:05.316494 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-d54a-account-create-update-zv67x" podStartSLOduration=2.316443972 podStartE2EDuration="2.316443972s" podCreationTimestamp="2026-01-22 09:27:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:27:05.309799512 +0000 UTC m=+995.153878575" watchObservedRunningTime="2026-01-22 09:27:05.316443972 +0000 UTC m=+995.160523035" Jan 22 09:27:05 crc kubenswrapper[4892]: I0122 09:27:05.344132 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d0c2888e-984a-482d-b7a3-5de66720aaf8-etc-swift\") pod \"swift-storage-0\" (UID: \"d0c2888e-984a-482d-b7a3-5de66720aaf8\") " pod="openstack/swift-storage-0" Jan 22 09:27:05 crc kubenswrapper[4892]: E0122 09:27:05.344921 4892 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 22 09:27:05 crc kubenswrapper[4892]: E0122 09:27:05.344951 4892 projected.go:194] Error preparing data for projected 
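volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found

The pod_startup_latency_tracker records in this stretch follow a simple identity: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the window spent pulling images (firstStartedPulling to lastFinishedPulling). For glance-db-create-bdn46 and placement-d54a-account-create-update-zv67x above, no pull happened (the zero 0001-01-01 timestamps), so the two durations are equal; for swift-ring-rebalance-x9lwk earlier, the roughly 3.57s pull window accounts for the gap between the 2.70s SLO figure and the 6.27s end-to-end figure. A small sketch reproducing that arithmetic from the logged timestamps, which use Go's default time.Time formatting:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the swift-ring-rebalance-x9lwk latency record.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2026-01-22 09:26:57 +0000 UTC")
        firstPull := parse("2026-01-22 09:26:58.609115394 +0000 UTC")
        lastPull := parse("2026-01-22 09:27:02.179905064 +0000 UTC")
        running := parse("2026-01-22 09:27:03.271450597 +0000 UTC")

        e2e := running.Sub(created)          // podStartE2EDuration: 6.271450597s
        slo := e2e - lastPull.Sub(firstPull) // ≈2.700660927s; the log's 2.700660917s
                                             // differs only in last-digit bookkeeping,
                                             // since the tracker uses its own observed time
        fmt.Println("e2e:", e2e, "slo:", slo)
    }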
volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 22 09:27:05 crc kubenswrapper[4892]: E0122 09:27:05.345003 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0c2888e-984a-482d-b7a3-5de66720aaf8-etc-swift podName:d0c2888e-984a-482d-b7a3-5de66720aaf8 nodeName:}" failed. No retries permitted until 2026-01-22 09:27:13.344984296 +0000 UTC m=+1003.189063439 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d0c2888e-984a-482d-b7a3-5de66720aaf8-etc-swift") pod "swift-storage-0" (UID: "d0c2888e-984a-482d-b7a3-5de66720aaf8") : configmap "swift-ring-files" not found Jan 22 09:27:06 crc kubenswrapper[4892]: I0122 09:27:06.302492 4892 generic.go:334] "Generic (PLEG): container finished" podID="44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1" containerID="4c59aaa0f642a96824fa24444ea1c48bee2b075e22415b626adce42e06642d56" exitCode=0 Jan 22 09:27:06 crc kubenswrapper[4892]: I0122 09:27:06.302642 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-bdn46" event={"ID":"44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1","Type":"ContainerDied","Data":"4c59aaa0f642a96824fa24444ea1c48bee2b075e22415b626adce42e06642d56"} Jan 22 09:27:06 crc kubenswrapper[4892]: I0122 09:27:06.308389 4892 generic.go:334] "Generic (PLEG): container finished" podID="8626f341-0124-473c-a852-58b5c4e24c0a" containerID="7440878a808fa395f4ff86947cd5dfa89555459cc7a51d7a5a6a67094af2d819" exitCode=0 Jan 22 09:27:06 crc kubenswrapper[4892]: I0122 09:27:06.308442 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a653-account-create-update-ggqvv" event={"ID":"8626f341-0124-473c-a852-58b5c4e24c0a","Type":"ContainerDied","Data":"7440878a808fa395f4ff86947cd5dfa89555459cc7a51d7a5a6a67094af2d819"} Jan 22 09:27:06 crc kubenswrapper[4892]: I0122 09:27:06.312832 4892 generic.go:334] "Generic (PLEG): container finished" podID="1deb2fb8-da6b-4557-bac8-48ad0bc42e52" containerID="ebfd97177e96052e5f34b6a1e08132d32cedec6a82b2370f3f674152dad4d597" exitCode=0 Jan 22 09:27:06 crc kubenswrapper[4892]: I0122 09:27:06.312927 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-cg57h" event={"ID":"1deb2fb8-da6b-4557-bac8-48ad0bc42e52","Type":"ContainerDied","Data":"ebfd97177e96052e5f34b6a1e08132d32cedec6a82b2370f3f674152dad4d597"} Jan 22 09:27:06 crc kubenswrapper[4892]: I0122 09:27:06.320518 4892 generic.go:334] "Generic (PLEG): container finished" podID="08202c5a-0fd3-454b-b3e8-fe19c0abfb1d" containerID="c58ad6737c6e9d0e15931d7119af076b7652496f8a9f2552718d2db3884992b4" exitCode=0 Jan 22 09:27:06 crc kubenswrapper[4892]: I0122 09:27:06.320586 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d54a-account-create-update-zv67x" event={"ID":"08202c5a-0fd3-454b-b3e8-fe19c0abfb1d","Type":"ContainerDied","Data":"c58ad6737c6e9d0e15931d7119af076b7652496f8a9f2552718d2db3884992b4"} Jan 22 09:27:06 crc kubenswrapper[4892]: I0122 09:27:06.342951 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zc7cb" Jan 22 09:27:06 crc kubenswrapper[4892]: I0122 09:27:06.464778 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zc7cb" Jan 22 09:27:06 crc kubenswrapper[4892]: I0122 09:27:06.580960 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-zc7cb"] Jan 22 09:27:06 crc kubenswrapper[4892]: I0122 09:27:06.739446 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cb545bd4c-4flns" Jan 22 09:27:06 crc kubenswrapper[4892]: I0122 09:27:06.798865 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-z4lg2"] Jan 22 09:27:06 crc kubenswrapper[4892]: I0122 09:27:06.799082 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757dc6fff9-z4lg2" podUID="d4ebb0c2-1567-423e-ac41-69ffffbe396e" containerName="dnsmasq-dns" containerID="cri-o://3b557135410561259b57f87f4281ffc1b70b4be82060e3bd7ac7f4fd2e0bb37b" gracePeriod=10 Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.015417 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-96xfx" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.029936 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-cg57h" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.048970 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a427-account-create-update-xncm7" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.092817 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4hxs\" (UniqueName: \"kubernetes.io/projected/e479527f-d4fd-4c96-ab6f-8c205525df26-kube-api-access-s4hxs\") pod \"e479527f-d4fd-4c96-ab6f-8c205525df26\" (UID: \"e479527f-d4fd-4c96-ab6f-8c205525df26\") " Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.092968 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e479527f-d4fd-4c96-ab6f-8c205525df26-operator-scripts\") pod \"e479527f-d4fd-4c96-ab6f-8c205525df26\" (UID: \"e479527f-d4fd-4c96-ab6f-8c205525df26\") " Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.093041 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf8l7\" (UniqueName: \"kubernetes.io/projected/1deb2fb8-da6b-4557-bac8-48ad0bc42e52-kube-api-access-tf8l7\") pod \"1deb2fb8-da6b-4557-bac8-48ad0bc42e52\" (UID: \"1deb2fb8-da6b-4557-bac8-48ad0bc42e52\") " Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.093076 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3562a65-6cdb-43f8-972c-32c454f33a14-operator-scripts\") pod \"b3562a65-6cdb-43f8-972c-32c454f33a14\" (UID: \"b3562a65-6cdb-43f8-972c-32c454f33a14\") " Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.093142 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1deb2fb8-da6b-4557-bac8-48ad0bc42e52-operator-scripts\") pod \"1deb2fb8-da6b-4557-bac8-48ad0bc42e52\" (UID: \"1deb2fb8-da6b-4557-bac8-48ad0bc42e52\") " Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.093202 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxd7h\" (UniqueName: \"kubernetes.io/projected/b3562a65-6cdb-43f8-972c-32c454f33a14-kube-api-access-bxd7h\") pod \"b3562a65-6cdb-43f8-972c-32c454f33a14\" (UID: \"b3562a65-6cdb-43f8-972c-32c454f33a14\") " Jan 22 09:27:07 crc 
kubenswrapper[4892]: I0122 09:27:07.093607 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e479527f-d4fd-4c96-ab6f-8c205525df26-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e479527f-d4fd-4c96-ab6f-8c205525df26" (UID: "e479527f-d4fd-4c96-ab6f-8c205525df26"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.094100 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1deb2fb8-da6b-4557-bac8-48ad0bc42e52-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1deb2fb8-da6b-4557-bac8-48ad0bc42e52" (UID: "1deb2fb8-da6b-4557-bac8-48ad0bc42e52"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.094093 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3562a65-6cdb-43f8-972c-32c454f33a14-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b3562a65-6cdb-43f8-972c-32c454f33a14" (UID: "b3562a65-6cdb-43f8-972c-32c454f33a14"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.101499 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1deb2fb8-da6b-4557-bac8-48ad0bc42e52-kube-api-access-tf8l7" (OuterVolumeSpecName: "kube-api-access-tf8l7") pod "1deb2fb8-da6b-4557-bac8-48ad0bc42e52" (UID: "1deb2fb8-da6b-4557-bac8-48ad0bc42e52"). InnerVolumeSpecName "kube-api-access-tf8l7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.103012 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e479527f-d4fd-4c96-ab6f-8c205525df26-kube-api-access-s4hxs" (OuterVolumeSpecName: "kube-api-access-s4hxs") pod "e479527f-d4fd-4c96-ab6f-8c205525df26" (UID: "e479527f-d4fd-4c96-ab6f-8c205525df26"). InnerVolumeSpecName "kube-api-access-s4hxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.115632 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3562a65-6cdb-43f8-972c-32c454f33a14-kube-api-access-bxd7h" (OuterVolumeSpecName: "kube-api-access-bxd7h") pod "b3562a65-6cdb-43f8-972c-32c454f33a14" (UID: "b3562a65-6cdb-43f8-972c-32c454f33a14"). InnerVolumeSpecName "kube-api-access-bxd7h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.198298 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4hxs\" (UniqueName: \"kubernetes.io/projected/e479527f-d4fd-4c96-ab6f-8c205525df26-kube-api-access-s4hxs\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.198336 4892 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e479527f-d4fd-4c96-ab6f-8c205525df26-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.198346 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tf8l7\" (UniqueName: \"kubernetes.io/projected/1deb2fb8-da6b-4557-bac8-48ad0bc42e52-kube-api-access-tf8l7\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.198354 4892 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3562a65-6cdb-43f8-972c-32c454f33a14-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.198362 4892 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1deb2fb8-da6b-4557-bac8-48ad0bc42e52-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.198420 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxd7h\" (UniqueName: \"kubernetes.io/projected/b3562a65-6cdb-43f8-972c-32c454f33a14-kube-api-access-bxd7h\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.329672 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-cg57h" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.329738 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-cg57h" event={"ID":"1deb2fb8-da6b-4557-bac8-48ad0bc42e52","Type":"ContainerDied","Data":"cd67b940985d4a820b32088e515c550a0762ecd87dfa0b2bf7e47f8d063134a1"} Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.329775 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd67b940985d4a820b32088e515c550a0762ecd87dfa0b2bf7e47f8d063134a1" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.339799 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a427-account-create-update-xncm7" event={"ID":"e479527f-d4fd-4c96-ab6f-8c205525df26","Type":"ContainerDied","Data":"c4dfcd6f92eb18ea66792452c83c9813d26bbc7ddd6186c0003f95e7224227c4"} Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.339849 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4dfcd6f92eb18ea66792452c83c9813d26bbc7ddd6186c0003f95e7224227c4" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.339903 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-a427-account-create-update-xncm7" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.342347 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-96xfx" event={"ID":"b3562a65-6cdb-43f8-972c-32c454f33a14","Type":"ContainerDied","Data":"57c20b999b2dc1c479c0fb104692d873d07d596d1f2f8d257994730a2dd87a30"} Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.342390 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57c20b999b2dc1c479c0fb104692d873d07d596d1f2f8d257994730a2dd87a30" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.342445 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-96xfx" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.345831 4892 generic.go:334] "Generic (PLEG): container finished" podID="d4ebb0c2-1567-423e-ac41-69ffffbe396e" containerID="3b557135410561259b57f87f4281ffc1b70b4be82060e3bd7ac7f4fd2e0bb37b" exitCode=0 Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.345984 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-z4lg2" event={"ID":"d4ebb0c2-1567-423e-ac41-69ffffbe396e","Type":"ContainerDied","Data":"3b557135410561259b57f87f4281ffc1b70b4be82060e3bd7ac7f4fd2e0bb37b"} Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.406452 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-z4lg2" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.510410 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ebb0c2-1567-423e-ac41-69ffffbe396e-config\") pod \"d4ebb0c2-1567-423e-ac41-69ffffbe396e\" (UID: \"d4ebb0c2-1567-423e-ac41-69ffffbe396e\") " Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.510470 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4ebb0c2-1567-423e-ac41-69ffffbe396e-ovsdbserver-nb\") pod \"d4ebb0c2-1567-423e-ac41-69ffffbe396e\" (UID: \"d4ebb0c2-1567-423e-ac41-69ffffbe396e\") " Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.510525 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4ebb0c2-1567-423e-ac41-69ffffbe396e-ovsdbserver-sb\") pod \"d4ebb0c2-1567-423e-ac41-69ffffbe396e\" (UID: \"d4ebb0c2-1567-423e-ac41-69ffffbe396e\") " Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.510540 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4ebb0c2-1567-423e-ac41-69ffffbe396e-dns-svc\") pod \"d4ebb0c2-1567-423e-ac41-69ffffbe396e\" (UID: \"d4ebb0c2-1567-423e-ac41-69ffffbe396e\") " Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.510724 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfjg5\" (UniqueName: \"kubernetes.io/projected/d4ebb0c2-1567-423e-ac41-69ffffbe396e-kube-api-access-bfjg5\") pod \"d4ebb0c2-1567-423e-ac41-69ffffbe396e\" (UID: \"d4ebb0c2-1567-423e-ac41-69ffffbe396e\") " Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.519897 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4ebb0c2-1567-423e-ac41-69ffffbe396e-kube-api-access-bfjg5" (OuterVolumeSpecName: 
"kube-api-access-bfjg5") pod "d4ebb0c2-1567-423e-ac41-69ffffbe396e" (UID: "d4ebb0c2-1567-423e-ac41-69ffffbe396e"). InnerVolumeSpecName "kube-api-access-bfjg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.546575 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4ebb0c2-1567-423e-ac41-69ffffbe396e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d4ebb0c2-1567-423e-ac41-69ffffbe396e" (UID: "d4ebb0c2-1567-423e-ac41-69ffffbe396e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.547885 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4ebb0c2-1567-423e-ac41-69ffffbe396e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d4ebb0c2-1567-423e-ac41-69ffffbe396e" (UID: "d4ebb0c2-1567-423e-ac41-69ffffbe396e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.563332 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4ebb0c2-1567-423e-ac41-69ffffbe396e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d4ebb0c2-1567-423e-ac41-69ffffbe396e" (UID: "d4ebb0c2-1567-423e-ac41-69ffffbe396e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.578123 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4ebb0c2-1567-423e-ac41-69ffffbe396e-config" (OuterVolumeSpecName: "config") pod "d4ebb0c2-1567-423e-ac41-69ffffbe396e" (UID: "d4ebb0c2-1567-423e-ac41-69ffffbe396e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.612331 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfjg5\" (UniqueName: \"kubernetes.io/projected/d4ebb0c2-1567-423e-ac41-69ffffbe396e-kube-api-access-bfjg5\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.612355 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ebb0c2-1567-423e-ac41-69ffffbe396e-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.612365 4892 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4ebb0c2-1567-423e-ac41-69ffffbe396e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.612373 4892 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4ebb0c2-1567-423e-ac41-69ffffbe396e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.612382 4892 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4ebb0c2-1567-423e-ac41-69ffffbe396e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.707518 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a653-account-create-update-ggqvv" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.777216 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-bdn46" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.818244 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dm77\" (UniqueName: \"kubernetes.io/projected/8626f341-0124-473c-a852-58b5c4e24c0a-kube-api-access-6dm77\") pod \"8626f341-0124-473c-a852-58b5c4e24c0a\" (UID: \"8626f341-0124-473c-a852-58b5c4e24c0a\") " Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.818326 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqw24\" (UniqueName: \"kubernetes.io/projected/44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1-kube-api-access-jqw24\") pod \"44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1\" (UID: \"44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1\") " Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.818481 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8626f341-0124-473c-a852-58b5c4e24c0a-operator-scripts\") pod \"8626f341-0124-473c-a852-58b5c4e24c0a\" (UID: \"8626f341-0124-473c-a852-58b5c4e24c0a\") " Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.818534 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1-operator-scripts\") pod \"44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1\" (UID: \"44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1\") " Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.819583 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1" (UID: "44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.820820 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8626f341-0124-473c-a852-58b5c4e24c0a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8626f341-0124-473c-a852-58b5c4e24c0a" (UID: "8626f341-0124-473c-a852-58b5c4e24c0a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.824029 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1-kube-api-access-jqw24" (OuterVolumeSpecName: "kube-api-access-jqw24") pod "44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1" (UID: "44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1"). InnerVolumeSpecName "kube-api-access-jqw24". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.831217 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8626f341-0124-473c-a852-58b5c4e24c0a-kube-api-access-6dm77" (OuterVolumeSpecName: "kube-api-access-6dm77") pod "8626f341-0124-473c-a852-58b5c4e24c0a" (UID: "8626f341-0124-473c-a852-58b5c4e24c0a"). InnerVolumeSpecName "kube-api-access-6dm77". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.887009 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d54a-account-create-update-zv67x" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.919832 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqvg7\" (UniqueName: \"kubernetes.io/projected/08202c5a-0fd3-454b-b3e8-fe19c0abfb1d-kube-api-access-gqvg7\") pod \"08202c5a-0fd3-454b-b3e8-fe19c0abfb1d\" (UID: \"08202c5a-0fd3-454b-b3e8-fe19c0abfb1d\") " Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.919933 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08202c5a-0fd3-454b-b3e8-fe19c0abfb1d-operator-scripts\") pod \"08202c5a-0fd3-454b-b3e8-fe19c0abfb1d\" (UID: \"08202c5a-0fd3-454b-b3e8-fe19c0abfb1d\") " Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.920385 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dm77\" (UniqueName: \"kubernetes.io/projected/8626f341-0124-473c-a852-58b5c4e24c0a-kube-api-access-6dm77\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.920400 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqw24\" (UniqueName: \"kubernetes.io/projected/44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1-kube-api-access-jqw24\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.920411 4892 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8626f341-0124-473c-a852-58b5c4e24c0a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.920421 4892 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.920501 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08202c5a-0fd3-454b-b3e8-fe19c0abfb1d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "08202c5a-0fd3-454b-b3e8-fe19c0abfb1d" (UID: "08202c5a-0fd3-454b-b3e8-fe19c0abfb1d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:07 crc kubenswrapper[4892]: I0122 09:27:07.929066 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08202c5a-0fd3-454b-b3e8-fe19c0abfb1d-kube-api-access-gqvg7" (OuterVolumeSpecName: "kube-api-access-gqvg7") pod "08202c5a-0fd3-454b-b3e8-fe19c0abfb1d" (UID: "08202c5a-0fd3-454b-b3e8-fe19c0abfb1d"). InnerVolumeSpecName "kube-api-access-gqvg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:27:08 crc kubenswrapper[4892]: I0122 09:27:08.021529 4892 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08202c5a-0fd3-454b-b3e8-fe19c0abfb1d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:08 crc kubenswrapper[4892]: I0122 09:27:08.021555 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqvg7\" (UniqueName: \"kubernetes.io/projected/08202c5a-0fd3-454b-b3e8-fe19c0abfb1d-kube-api-access-gqvg7\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:08 crc kubenswrapper[4892]: I0122 09:27:08.354198 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d54a-account-create-update-zv67x" event={"ID":"08202c5a-0fd3-454b-b3e8-fe19c0abfb1d","Type":"ContainerDied","Data":"7a0ccfa14b8cf6b63388697ab1a47f722ae7a5429a30bfc17723417e8a656319"} Jan 22 09:27:08 crc kubenswrapper[4892]: I0122 09:27:08.354215 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d54a-account-create-update-zv67x" Jan 22 09:27:08 crc kubenswrapper[4892]: I0122 09:27:08.354309 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a0ccfa14b8cf6b63388697ab1a47f722ae7a5429a30bfc17723417e8a656319" Jan 22 09:27:08 crc kubenswrapper[4892]: I0122 09:27:08.355423 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-bdn46" event={"ID":"44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1","Type":"ContainerDied","Data":"cc704b578fe2e45bf6edf49549ffe5c9e09f3a1be2a0ecc178a9c3e1dc93d8c9"} Jan 22 09:27:08 crc kubenswrapper[4892]: I0122 09:27:08.355472 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc704b578fe2e45bf6edf49549ffe5c9e09f3a1be2a0ecc178a9c3e1dc93d8c9" Jan 22 09:27:08 crc kubenswrapper[4892]: I0122 09:27:08.355544 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-bdn46" Jan 22 09:27:08 crc kubenswrapper[4892]: I0122 09:27:08.357459 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-z4lg2" event={"ID":"d4ebb0c2-1567-423e-ac41-69ffffbe396e","Type":"ContainerDied","Data":"fac7fd19e27bbaad05487f60f4a6df07f7883d3b17e02bfcddc0bc1185d40a36"} Jan 22 09:27:08 crc kubenswrapper[4892]: I0122 09:27:08.357521 4892 scope.go:117] "RemoveContainer" containerID="3b557135410561259b57f87f4281ffc1b70b4be82060e3bd7ac7f4fd2e0bb37b" Jan 22 09:27:08 crc kubenswrapper[4892]: I0122 09:27:08.357696 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-z4lg2" Jan 22 09:27:08 crc kubenswrapper[4892]: I0122 09:27:08.359899 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a653-account-create-update-ggqvv" event={"ID":"8626f341-0124-473c-a852-58b5c4e24c0a","Type":"ContainerDied","Data":"71a063895421e09562d33ff95a6fe4b90870f0f7bb0e5254eaeb2c1426176e5b"} Jan 22 09:27:08 crc kubenswrapper[4892]: I0122 09:27:08.359924 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71a063895421e09562d33ff95a6fe4b90870f0f7bb0e5254eaeb2c1426176e5b" Jan 22 09:27:08 crc kubenswrapper[4892]: I0122 09:27:08.359976 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a653-account-create-update-ggqvv" Jan 22 09:27:08 crc kubenswrapper[4892]: I0122 09:27:08.360151 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zc7cb" podUID="2f3af1ce-0cf1-450b-a417-698b7f2f6ace" containerName="registry-server" containerID="cri-o://6b3d242fec82537ccb5d0f93edea5b53dd83965408da69ac248efdd090b9f539" gracePeriod=2 Jan 22 09:27:08 crc kubenswrapper[4892]: I0122 09:27:08.376806 4892 scope.go:117] "RemoveContainer" containerID="2ffa7b251a3198211c0ae18c7b25bf7dfeab8aea1928c22611be30cce719f2bf" Jan 22 09:27:08 crc kubenswrapper[4892]: I0122 09:27:08.545741 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-z4lg2"] Jan 22 09:27:08 crc kubenswrapper[4892]: I0122 09:27:08.555385 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-z4lg2"] Jan 22 09:27:08 crc kubenswrapper[4892]: I0122 09:27:08.888780 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zc7cb" Jan 22 09:27:08 crc kubenswrapper[4892]: I0122 09:27:08.945928 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f3af1ce-0cf1-450b-a417-698b7f2f6ace-utilities\") pod \"2f3af1ce-0cf1-450b-a417-698b7f2f6ace\" (UID: \"2f3af1ce-0cf1-450b-a417-698b7f2f6ace\") " Jan 22 09:27:08 crc kubenswrapper[4892]: I0122 09:27:08.946060 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgsqv\" (UniqueName: \"kubernetes.io/projected/2f3af1ce-0cf1-450b-a417-698b7f2f6ace-kube-api-access-tgsqv\") pod \"2f3af1ce-0cf1-450b-a417-698b7f2f6ace\" (UID: \"2f3af1ce-0cf1-450b-a417-698b7f2f6ace\") " Jan 22 09:27:08 crc kubenswrapper[4892]: I0122 09:27:08.946097 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f3af1ce-0cf1-450b-a417-698b7f2f6ace-catalog-content\") pod \"2f3af1ce-0cf1-450b-a417-698b7f2f6ace\" (UID: \"2f3af1ce-0cf1-450b-a417-698b7f2f6ace\") " Jan 22 09:27:08 crc kubenswrapper[4892]: I0122 09:27:08.946960 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f3af1ce-0cf1-450b-a417-698b7f2f6ace-utilities" (OuterVolumeSpecName: "utilities") pod "2f3af1ce-0cf1-450b-a417-698b7f2f6ace" (UID: "2f3af1ce-0cf1-450b-a417-698b7f2f6ace"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:27:08 crc kubenswrapper[4892]: I0122 09:27:08.959981 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f3af1ce-0cf1-450b-a417-698b7f2f6ace-kube-api-access-tgsqv" (OuterVolumeSpecName: "kube-api-access-tgsqv") pod "2f3af1ce-0cf1-450b-a417-698b7f2f6ace" (UID: "2f3af1ce-0cf1-450b-a417-698b7f2f6ace"). InnerVolumeSpecName "kube-api-access-tgsqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:27:08 crc kubenswrapper[4892]: I0122 09:27:08.972591 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f3af1ce-0cf1-450b-a417-698b7f2f6ace-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f3af1ce-0cf1-450b-a417-698b7f2f6ace" (UID: "2f3af1ce-0cf1-450b-a417-698b7f2f6ace"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.048327 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f3af1ce-0cf1-450b-a417-698b7f2f6ace-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.048356 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgsqv\" (UniqueName: \"kubernetes.io/projected/2f3af1ce-0cf1-450b-a417-698b7f2f6ace-kube-api-access-tgsqv\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.048366 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f3af1ce-0cf1-450b-a417-698b7f2f6ace-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.365251 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-nhpjt"] Jan 22 09:27:09 crc kubenswrapper[4892]: E0122 09:27:09.365650 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4ebb0c2-1567-423e-ac41-69ffffbe396e" containerName="dnsmasq-dns" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.365669 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4ebb0c2-1567-423e-ac41-69ffffbe396e" containerName="dnsmasq-dns" Jan 22 09:27:09 crc kubenswrapper[4892]: E0122 09:27:09.365684 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08202c5a-0fd3-454b-b3e8-fe19c0abfb1d" containerName="mariadb-account-create-update" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.365692 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="08202c5a-0fd3-454b-b3e8-fe19c0abfb1d" containerName="mariadb-account-create-update" Jan 22 09:27:09 crc kubenswrapper[4892]: E0122 09:27:09.365709 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f3af1ce-0cf1-450b-a417-698b7f2f6ace" containerName="registry-server" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.365717 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f3af1ce-0cf1-450b-a417-698b7f2f6ace" containerName="registry-server" Jan 22 09:27:09 crc kubenswrapper[4892]: E0122 09:27:09.365731 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1" containerName="mariadb-database-create" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.365739 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1" containerName="mariadb-database-create" Jan 22 09:27:09 crc kubenswrapper[4892]: E0122 09:27:09.365755 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4ebb0c2-1567-423e-ac41-69ffffbe396e" containerName="init" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.365763 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4ebb0c2-1567-423e-ac41-69ffffbe396e" containerName="init" Jan 22 09:27:09 crc kubenswrapper[4892]: E0122 09:27:09.365776 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e479527f-d4fd-4c96-ab6f-8c205525df26" containerName="mariadb-account-create-update" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.365784 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="e479527f-d4fd-4c96-ab6f-8c205525df26" containerName="mariadb-account-create-update" Jan 22 09:27:09 crc kubenswrapper[4892]: E0122 09:27:09.365797 4892 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="8626f341-0124-473c-a852-58b5c4e24c0a" containerName="mariadb-account-create-update" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.365807 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="8626f341-0124-473c-a852-58b5c4e24c0a" containerName="mariadb-account-create-update" Jan 22 09:27:09 crc kubenswrapper[4892]: E0122 09:27:09.365819 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3562a65-6cdb-43f8-972c-32c454f33a14" containerName="mariadb-database-create" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.365829 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3562a65-6cdb-43f8-972c-32c454f33a14" containerName="mariadb-database-create" Jan 22 09:27:09 crc kubenswrapper[4892]: E0122 09:27:09.365848 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f3af1ce-0cf1-450b-a417-698b7f2f6ace" containerName="extract-utilities" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.365858 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f3af1ce-0cf1-450b-a417-698b7f2f6ace" containerName="extract-utilities" Jan 22 09:27:09 crc kubenswrapper[4892]: E0122 09:27:09.365866 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1deb2fb8-da6b-4557-bac8-48ad0bc42e52" containerName="mariadb-database-create" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.365875 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="1deb2fb8-da6b-4557-bac8-48ad0bc42e52" containerName="mariadb-database-create" Jan 22 09:27:09 crc kubenswrapper[4892]: E0122 09:27:09.365892 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f3af1ce-0cf1-450b-a417-698b7f2f6ace" containerName="extract-content" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.365899 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f3af1ce-0cf1-450b-a417-698b7f2f6ace" containerName="extract-content" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.366071 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1" containerName="mariadb-database-create" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.366109 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f3af1ce-0cf1-450b-a417-698b7f2f6ace" containerName="registry-server" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.366122 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3562a65-6cdb-43f8-972c-32c454f33a14" containerName="mariadb-database-create" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.366130 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4ebb0c2-1567-423e-ac41-69ffffbe396e" containerName="dnsmasq-dns" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.366143 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="e479527f-d4fd-4c96-ab6f-8c205525df26" containerName="mariadb-account-create-update" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.366155 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="08202c5a-0fd3-454b-b3e8-fe19c0abfb1d" containerName="mariadb-account-create-update" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.366169 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="8626f341-0124-473c-a852-58b5c4e24c0a" containerName="mariadb-account-create-update" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.366192 4892 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1deb2fb8-da6b-4557-bac8-48ad0bc42e52" containerName="mariadb-database-create" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.366839 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-nhpjt" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.369734 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.369921 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-w5vlr" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.370334 4892 generic.go:334] "Generic (PLEG): container finished" podID="2f3af1ce-0cf1-450b-a417-698b7f2f6ace" containerID="6b3d242fec82537ccb5d0f93edea5b53dd83965408da69ac248efdd090b9f539" exitCode=0 Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.370421 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zc7cb" event={"ID":"2f3af1ce-0cf1-450b-a417-698b7f2f6ace","Type":"ContainerDied","Data":"6b3d242fec82537ccb5d0f93edea5b53dd83965408da69ac248efdd090b9f539"} Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.370436 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zc7cb" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.370448 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zc7cb" event={"ID":"2f3af1ce-0cf1-450b-a417-698b7f2f6ace","Type":"ContainerDied","Data":"02b574edde9b0e718c121b5236d8901130b0133ae72449208288c75ee2d0a07b"} Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.370467 4892 scope.go:117] "RemoveContainer" containerID="6b3d242fec82537ccb5d0f93edea5b53dd83965408da69ac248efdd090b9f539" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.382119 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-nhpjt"] Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.437076 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4ebb0c2-1567-423e-ac41-69ffffbe396e" path="/var/lib/kubelet/pods/d4ebb0c2-1567-423e-ac41-69ffffbe396e/volumes" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.439684 4892 scope.go:117] "RemoveContainer" containerID="be51e2cf04dffae540db99487d6b12cbb1f6333fbd0913b4f1c196402521694d" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.450388 4892 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-f8rll" podUID="91fb6665-4bf4-4558-abf7-788627c34a1c" containerName="ovn-controller" probeResult="failure" output=< Jan 22 09:27:09 crc kubenswrapper[4892]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 22 09:27:09 crc kubenswrapper[4892]: > Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.450848 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zc7cb"] Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.454224 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f2c286dd-c33b-453a-abdd-90baff8ff466-db-sync-config-data\") pod \"glance-db-sync-nhpjt\" (UID: \"f2c286dd-c33b-453a-abdd-90baff8ff466\") " pod="openstack/glance-db-sync-nhpjt" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.454578 4892 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2c286dd-c33b-453a-abdd-90baff8ff466-config-data\") pod \"glance-db-sync-nhpjt\" (UID: \"f2c286dd-c33b-453a-abdd-90baff8ff466\") " pod="openstack/glance-db-sync-nhpjt" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.454733 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2c286dd-c33b-453a-abdd-90baff8ff466-combined-ca-bundle\") pod \"glance-db-sync-nhpjt\" (UID: \"f2c286dd-c33b-453a-abdd-90baff8ff466\") " pod="openstack/glance-db-sync-nhpjt" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.454897 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nk97\" (UniqueName: \"kubernetes.io/projected/f2c286dd-c33b-453a-abdd-90baff8ff466-kube-api-access-7nk97\") pod \"glance-db-sync-nhpjt\" (UID: \"f2c286dd-c33b-453a-abdd-90baff8ff466\") " pod="openstack/glance-db-sync-nhpjt" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.457496 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zc7cb"] Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.467876 4892 scope.go:117] "RemoveContainer" containerID="973c3eae5adf1cc66a851c86d4945470f8d54c7fbce30e4e7e41b3905701ce42" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.492863 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-snr9q" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.498352 4892 scope.go:117] "RemoveContainer" containerID="6b3d242fec82537ccb5d0f93edea5b53dd83965408da69ac248efdd090b9f539" Jan 22 09:27:09 crc kubenswrapper[4892]: E0122 09:27:09.500002 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b3d242fec82537ccb5d0f93edea5b53dd83965408da69ac248efdd090b9f539\": container with ID starting with 6b3d242fec82537ccb5d0f93edea5b53dd83965408da69ac248efdd090b9f539 not found: ID does not exist" containerID="6b3d242fec82537ccb5d0f93edea5b53dd83965408da69ac248efdd090b9f539" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.500025 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b3d242fec82537ccb5d0f93edea5b53dd83965408da69ac248efdd090b9f539"} err="failed to get container status \"6b3d242fec82537ccb5d0f93edea5b53dd83965408da69ac248efdd090b9f539\": rpc error: code = NotFound desc = could not find container \"6b3d242fec82537ccb5d0f93edea5b53dd83965408da69ac248efdd090b9f539\": container with ID starting with 6b3d242fec82537ccb5d0f93edea5b53dd83965408da69ac248efdd090b9f539 not found: ID does not exist" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.500044 4892 scope.go:117] "RemoveContainer" containerID="be51e2cf04dffae540db99487d6b12cbb1f6333fbd0913b4f1c196402521694d" Jan 22 09:27:09 crc kubenswrapper[4892]: E0122 09:27:09.500372 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be51e2cf04dffae540db99487d6b12cbb1f6333fbd0913b4f1c196402521694d\": container with ID starting with be51e2cf04dffae540db99487d6b12cbb1f6333fbd0913b4f1c196402521694d not found: ID does not exist" containerID="be51e2cf04dffae540db99487d6b12cbb1f6333fbd0913b4f1c196402521694d" Jan 22 09:27:09 crc 
kubenswrapper[4892]: I0122 09:27:09.500390 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be51e2cf04dffae540db99487d6b12cbb1f6333fbd0913b4f1c196402521694d"} err="failed to get container status \"be51e2cf04dffae540db99487d6b12cbb1f6333fbd0913b4f1c196402521694d\": rpc error: code = NotFound desc = could not find container \"be51e2cf04dffae540db99487d6b12cbb1f6333fbd0913b4f1c196402521694d\": container with ID starting with be51e2cf04dffae540db99487d6b12cbb1f6333fbd0913b4f1c196402521694d not found: ID does not exist" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.500402 4892 scope.go:117] "RemoveContainer" containerID="973c3eae5adf1cc66a851c86d4945470f8d54c7fbce30e4e7e41b3905701ce42" Jan 22 09:27:09 crc kubenswrapper[4892]: E0122 09:27:09.500596 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"973c3eae5adf1cc66a851c86d4945470f8d54c7fbce30e4e7e41b3905701ce42\": container with ID starting with 973c3eae5adf1cc66a851c86d4945470f8d54c7fbce30e4e7e41b3905701ce42 not found: ID does not exist" containerID="973c3eae5adf1cc66a851c86d4945470f8d54c7fbce30e4e7e41b3905701ce42" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.500616 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"973c3eae5adf1cc66a851c86d4945470f8d54c7fbce30e4e7e41b3905701ce42"} err="failed to get container status \"973c3eae5adf1cc66a851c86d4945470f8d54c7fbce30e4e7e41b3905701ce42\": rpc error: code = NotFound desc = could not find container \"973c3eae5adf1cc66a851c86d4945470f8d54c7fbce30e4e7e41b3905701ce42\": container with ID starting with 973c3eae5adf1cc66a851c86d4945470f8d54c7fbce30e4e7e41b3905701ce42 not found: ID does not exist" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.521459 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-snr9q" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.556552 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2c286dd-c33b-453a-abdd-90baff8ff466-combined-ca-bundle\") pod \"glance-db-sync-nhpjt\" (UID: \"f2c286dd-c33b-453a-abdd-90baff8ff466\") " pod="openstack/glance-db-sync-nhpjt" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.556632 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nk97\" (UniqueName: \"kubernetes.io/projected/f2c286dd-c33b-453a-abdd-90baff8ff466-kube-api-access-7nk97\") pod \"glance-db-sync-nhpjt\" (UID: \"f2c286dd-c33b-453a-abdd-90baff8ff466\") " pod="openstack/glance-db-sync-nhpjt" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.557407 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f2c286dd-c33b-453a-abdd-90baff8ff466-db-sync-config-data\") pod \"glance-db-sync-nhpjt\" (UID: \"f2c286dd-c33b-453a-abdd-90baff8ff466\") " pod="openstack/glance-db-sync-nhpjt" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.557505 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2c286dd-c33b-453a-abdd-90baff8ff466-config-data\") pod \"glance-db-sync-nhpjt\" (UID: \"f2c286dd-c33b-453a-abdd-90baff8ff466\") " pod="openstack/glance-db-sync-nhpjt" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 
09:27:09.562056 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2c286dd-c33b-453a-abdd-90baff8ff466-combined-ca-bundle\") pod \"glance-db-sync-nhpjt\" (UID: \"f2c286dd-c33b-453a-abdd-90baff8ff466\") " pod="openstack/glance-db-sync-nhpjt" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.562209 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f2c286dd-c33b-453a-abdd-90baff8ff466-db-sync-config-data\") pod \"glance-db-sync-nhpjt\" (UID: \"f2c286dd-c33b-453a-abdd-90baff8ff466\") " pod="openstack/glance-db-sync-nhpjt" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.562482 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2c286dd-c33b-453a-abdd-90baff8ff466-config-data\") pod \"glance-db-sync-nhpjt\" (UID: \"f2c286dd-c33b-453a-abdd-90baff8ff466\") " pod="openstack/glance-db-sync-nhpjt" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.578578 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nk97\" (UniqueName: \"kubernetes.io/projected/f2c286dd-c33b-453a-abdd-90baff8ff466-kube-api-access-7nk97\") pod \"glance-db-sync-nhpjt\" (UID: \"f2c286dd-c33b-453a-abdd-90baff8ff466\") " pod="openstack/glance-db-sync-nhpjt" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.724789 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-f8rll-config-xslt5"] Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.725739 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-f8rll-config-xslt5" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.729036 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.739194 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-f8rll-config-xslt5"] Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.752799 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-nhpjt" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.764742 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60acebc8-cbc3-48f5-9a95-e6486392d7f0-scripts\") pod \"ovn-controller-f8rll-config-xslt5\" (UID: \"60acebc8-cbc3-48f5-9a95-e6486392d7f0\") " pod="openstack/ovn-controller-f8rll-config-xslt5" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.764952 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/60acebc8-cbc3-48f5-9a95-e6486392d7f0-var-run-ovn\") pod \"ovn-controller-f8rll-config-xslt5\" (UID: \"60acebc8-cbc3-48f5-9a95-e6486392d7f0\") " pod="openstack/ovn-controller-f8rll-config-xslt5" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.765018 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6hlx\" (UniqueName: \"kubernetes.io/projected/60acebc8-cbc3-48f5-9a95-e6486392d7f0-kube-api-access-v6hlx\") pod \"ovn-controller-f8rll-config-xslt5\" (UID: \"60acebc8-cbc3-48f5-9a95-e6486392d7f0\") " pod="openstack/ovn-controller-f8rll-config-xslt5" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.765056 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/60acebc8-cbc3-48f5-9a95-e6486392d7f0-var-log-ovn\") pod \"ovn-controller-f8rll-config-xslt5\" (UID: \"60acebc8-cbc3-48f5-9a95-e6486392d7f0\") " pod="openstack/ovn-controller-f8rll-config-xslt5" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.765078 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/60acebc8-cbc3-48f5-9a95-e6486392d7f0-additional-scripts\") pod \"ovn-controller-f8rll-config-xslt5\" (UID: \"60acebc8-cbc3-48f5-9a95-e6486392d7f0\") " pod="openstack/ovn-controller-f8rll-config-xslt5" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.765131 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/60acebc8-cbc3-48f5-9a95-e6486392d7f0-var-run\") pod \"ovn-controller-f8rll-config-xslt5\" (UID: \"60acebc8-cbc3-48f5-9a95-e6486392d7f0\") " pod="openstack/ovn-controller-f8rll-config-xslt5" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.808973 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.867035 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/60acebc8-cbc3-48f5-9a95-e6486392d7f0-var-run-ovn\") pod \"ovn-controller-f8rll-config-xslt5\" (UID: \"60acebc8-cbc3-48f5-9a95-e6486392d7f0\") " pod="openstack/ovn-controller-f8rll-config-xslt5" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.867116 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6hlx\" (UniqueName: \"kubernetes.io/projected/60acebc8-cbc3-48f5-9a95-e6486392d7f0-kube-api-access-v6hlx\") pod \"ovn-controller-f8rll-config-xslt5\" (UID: \"60acebc8-cbc3-48f5-9a95-e6486392d7f0\") " pod="openstack/ovn-controller-f8rll-config-xslt5" Jan 22 09:27:09 crc 
kubenswrapper[4892]: I0122 09:27:09.867160 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/60acebc8-cbc3-48f5-9a95-e6486392d7f0-var-log-ovn\") pod \"ovn-controller-f8rll-config-xslt5\" (UID: \"60acebc8-cbc3-48f5-9a95-e6486392d7f0\") " pod="openstack/ovn-controller-f8rll-config-xslt5" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.867178 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/60acebc8-cbc3-48f5-9a95-e6486392d7f0-additional-scripts\") pod \"ovn-controller-f8rll-config-xslt5\" (UID: \"60acebc8-cbc3-48f5-9a95-e6486392d7f0\") " pod="openstack/ovn-controller-f8rll-config-xslt5" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.867259 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/60acebc8-cbc3-48f5-9a95-e6486392d7f0-var-run\") pod \"ovn-controller-f8rll-config-xslt5\" (UID: \"60acebc8-cbc3-48f5-9a95-e6486392d7f0\") " pod="openstack/ovn-controller-f8rll-config-xslt5" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.867535 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/60acebc8-cbc3-48f5-9a95-e6486392d7f0-var-run-ovn\") pod \"ovn-controller-f8rll-config-xslt5\" (UID: \"60acebc8-cbc3-48f5-9a95-e6486392d7f0\") " pod="openstack/ovn-controller-f8rll-config-xslt5" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.868143 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/60acebc8-cbc3-48f5-9a95-e6486392d7f0-additional-scripts\") pod \"ovn-controller-f8rll-config-xslt5\" (UID: \"60acebc8-cbc3-48f5-9a95-e6486392d7f0\") " pod="openstack/ovn-controller-f8rll-config-xslt5" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.868217 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/60acebc8-cbc3-48f5-9a95-e6486392d7f0-var-log-ovn\") pod \"ovn-controller-f8rll-config-xslt5\" (UID: \"60acebc8-cbc3-48f5-9a95-e6486392d7f0\") " pod="openstack/ovn-controller-f8rll-config-xslt5" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.868252 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60acebc8-cbc3-48f5-9a95-e6486392d7f0-scripts\") pod \"ovn-controller-f8rll-config-xslt5\" (UID: \"60acebc8-cbc3-48f5-9a95-e6486392d7f0\") " pod="openstack/ovn-controller-f8rll-config-xslt5" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.868589 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/60acebc8-cbc3-48f5-9a95-e6486392d7f0-var-run\") pod \"ovn-controller-f8rll-config-xslt5\" (UID: \"60acebc8-cbc3-48f5-9a95-e6486392d7f0\") " pod="openstack/ovn-controller-f8rll-config-xslt5" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.869990 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60acebc8-cbc3-48f5-9a95-e6486392d7f0-scripts\") pod \"ovn-controller-f8rll-config-xslt5\" (UID: \"60acebc8-cbc3-48f5-9a95-e6486392d7f0\") " pod="openstack/ovn-controller-f8rll-config-xslt5" Jan 22 09:27:09 crc kubenswrapper[4892]: I0122 09:27:09.884235 4892 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6hlx\" (UniqueName: \"kubernetes.io/projected/60acebc8-cbc3-48f5-9a95-e6486392d7f0-kube-api-access-v6hlx\") pod \"ovn-controller-f8rll-config-xslt5\" (UID: \"60acebc8-cbc3-48f5-9a95-e6486392d7f0\") " pod="openstack/ovn-controller-f8rll-config-xslt5" Jan 22 09:27:10 crc kubenswrapper[4892]: I0122 09:27:10.047606 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-f8rll-config-xslt5" Jan 22 09:27:10 crc kubenswrapper[4892]: W0122 09:27:10.345196 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2c286dd_c33b_453a_abdd_90baff8ff466.slice/crio-4894d2fd86870d408af43a572cc379ce6ed906d1de3faa2382d9e90f7ae1b44e WatchSource:0}: Error finding container 4894d2fd86870d408af43a572cc379ce6ed906d1de3faa2382d9e90f7ae1b44e: Status 404 returned error can't find the container with id 4894d2fd86870d408af43a572cc379ce6ed906d1de3faa2382d9e90f7ae1b44e Jan 22 09:27:10 crc kubenswrapper[4892]: I0122 09:27:10.345310 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-nhpjt"] Jan 22 09:27:10 crc kubenswrapper[4892]: I0122 09:27:10.381748 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nhpjt" event={"ID":"f2c286dd-c33b-453a-abdd-90baff8ff466","Type":"ContainerStarted","Data":"4894d2fd86870d408af43a572cc379ce6ed906d1de3faa2382d9e90f7ae1b44e"} Jan 22 09:27:10 crc kubenswrapper[4892]: I0122 09:27:10.383732 4892 generic.go:334] "Generic (PLEG): container finished" podID="b23a9c04-b07c-4dd1-a475-7b1d70b9bddc" containerID="ea856c436fe317b662068ef7d89a2247160ce86b17545c6d111ab974808fdb58" exitCode=0 Jan 22 09:27:10 crc kubenswrapper[4892]: I0122 09:27:10.383774 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-x9lwk" event={"ID":"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc","Type":"ContainerDied","Data":"ea856c436fe317b662068ef7d89a2247160ce86b17545c6d111ab974808fdb58"} Jan 22 09:27:10 crc kubenswrapper[4892]: I0122 09:27:10.613748 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-f8rll-config-xslt5"] Jan 22 09:27:10 crc kubenswrapper[4892]: I0122 09:27:10.994471 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-f2tnj"] Jan 22 09:27:10 crc kubenswrapper[4892]: I0122 09:27:10.996355 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-f2tnj" Jan 22 09:27:10 crc kubenswrapper[4892]: I0122 09:27:10.998676 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 22 09:27:11 crc kubenswrapper[4892]: I0122 09:27:11.049363 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-f2tnj"] Jan 22 09:27:11 crc kubenswrapper[4892]: I0122 09:27:11.095853 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8dfj\" (UniqueName: \"kubernetes.io/projected/97dadffc-3fe4-4378-ada6-72b8262f2180-kube-api-access-s8dfj\") pod \"root-account-create-update-f2tnj\" (UID: \"97dadffc-3fe4-4378-ada6-72b8262f2180\") " pod="openstack/root-account-create-update-f2tnj" Jan 22 09:27:11 crc kubenswrapper[4892]: I0122 09:27:11.096267 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97dadffc-3fe4-4378-ada6-72b8262f2180-operator-scripts\") pod \"root-account-create-update-f2tnj\" (UID: \"97dadffc-3fe4-4378-ada6-72b8262f2180\") " pod="openstack/root-account-create-update-f2tnj" Jan 22 09:27:11 crc kubenswrapper[4892]: I0122 09:27:11.198513 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8dfj\" (UniqueName: \"kubernetes.io/projected/97dadffc-3fe4-4378-ada6-72b8262f2180-kube-api-access-s8dfj\") pod \"root-account-create-update-f2tnj\" (UID: \"97dadffc-3fe4-4378-ada6-72b8262f2180\") " pod="openstack/root-account-create-update-f2tnj" Jan 22 09:27:11 crc kubenswrapper[4892]: I0122 09:27:11.198801 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97dadffc-3fe4-4378-ada6-72b8262f2180-operator-scripts\") pod \"root-account-create-update-f2tnj\" (UID: \"97dadffc-3fe4-4378-ada6-72b8262f2180\") " pod="openstack/root-account-create-update-f2tnj" Jan 22 09:27:11 crc kubenswrapper[4892]: I0122 09:27:11.199597 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97dadffc-3fe4-4378-ada6-72b8262f2180-operator-scripts\") pod \"root-account-create-update-f2tnj\" (UID: \"97dadffc-3fe4-4378-ada6-72b8262f2180\") " pod="openstack/root-account-create-update-f2tnj" Jan 22 09:27:11 crc kubenswrapper[4892]: I0122 09:27:11.220664 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8dfj\" (UniqueName: \"kubernetes.io/projected/97dadffc-3fe4-4378-ada6-72b8262f2180-kube-api-access-s8dfj\") pod \"root-account-create-update-f2tnj\" (UID: \"97dadffc-3fe4-4378-ada6-72b8262f2180\") " pod="openstack/root-account-create-update-f2tnj" Jan 22 09:27:11 crc kubenswrapper[4892]: I0122 09:27:11.347105 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-f2tnj" Jan 22 09:27:11 crc kubenswrapper[4892]: I0122 09:27:11.400159 4892 generic.go:334] "Generic (PLEG): container finished" podID="60acebc8-cbc3-48f5-9a95-e6486392d7f0" containerID="6e0685451da732e1f83de3cb22ff107893159d0811e0d9ff91fbdf035a796499" exitCode=0 Jan 22 09:27:11 crc kubenswrapper[4892]: I0122 09:27:11.400219 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-f8rll-config-xslt5" event={"ID":"60acebc8-cbc3-48f5-9a95-e6486392d7f0","Type":"ContainerDied","Data":"6e0685451da732e1f83de3cb22ff107893159d0811e0d9ff91fbdf035a796499"} Jan 22 09:27:11 crc kubenswrapper[4892]: I0122 09:27:11.401082 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-f8rll-config-xslt5" event={"ID":"60acebc8-cbc3-48f5-9a95-e6486392d7f0","Type":"ContainerStarted","Data":"cc56f44fccc97dba19cece5cb4bf3ca29b09e990b9ab399b646a32dfbfdcf3ce"} Jan 22 09:27:11 crc kubenswrapper[4892]: I0122 09:27:11.474704 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f3af1ce-0cf1-450b-a417-698b7f2f6ace" path="/var/lib/kubelet/pods/2f3af1ce-0cf1-450b-a417-698b7f2f6ace/volumes" Jan 22 09:27:11 crc kubenswrapper[4892]: I0122 09:27:11.874261 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-f2tnj"] Jan 22 09:27:11 crc kubenswrapper[4892]: I0122 09:27:11.878108 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-x9lwk" Jan 22 09:27:11 crc kubenswrapper[4892]: W0122 09:27:11.895268 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97dadffc_3fe4_4378_ada6_72b8262f2180.slice/crio-145af3cabeb61d77067b9b0546030cb691f908bbe2e7afe2150707a90f3377ab WatchSource:0}: Error finding container 145af3cabeb61d77067b9b0546030cb691f908bbe2e7afe2150707a90f3377ab: Status 404 returned error can't find the container with id 145af3cabeb61d77067b9b0546030cb691f908bbe2e7afe2150707a90f3377ab Jan 22 09:27:11 crc kubenswrapper[4892]: I0122 09:27:11.908623 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-swiftconf\") pod \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\" (UID: \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\") " Jan 22 09:27:11 crc kubenswrapper[4892]: I0122 09:27:11.908657 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-combined-ca-bundle\") pod \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\" (UID: \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\") " Jan 22 09:27:11 crc kubenswrapper[4892]: I0122 09:27:11.908681 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-etc-swift\") pod \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\" (UID: \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\") " Jan 22 09:27:11 crc kubenswrapper[4892]: I0122 09:27:11.908697 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-dispersionconf\") pod \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\" (UID: \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\") " Jan 22 09:27:11 crc kubenswrapper[4892]: 
I0122 09:27:11.908732 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-scripts\") pod \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\" (UID: \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\") " Jan 22 09:27:11 crc kubenswrapper[4892]: I0122 09:27:11.908752 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4r6h\" (UniqueName: \"kubernetes.io/projected/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-kube-api-access-k4r6h\") pod \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\" (UID: \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\") " Jan 22 09:27:11 crc kubenswrapper[4892]: I0122 09:27:11.908767 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-ring-data-devices\") pod \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\" (UID: \"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc\") " Jan 22 09:27:11 crc kubenswrapper[4892]: I0122 09:27:11.909838 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "b23a9c04-b07c-4dd1-a475-7b1d70b9bddc" (UID: "b23a9c04-b07c-4dd1-a475-7b1d70b9bddc"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:11 crc kubenswrapper[4892]: I0122 09:27:11.916239 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b23a9c04-b07c-4dd1-a475-7b1d70b9bddc" (UID: "b23a9c04-b07c-4dd1-a475-7b1d70b9bddc"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:27:11 crc kubenswrapper[4892]: I0122 09:27:11.919013 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-kube-api-access-k4r6h" (OuterVolumeSpecName: "kube-api-access-k4r6h") pod "b23a9c04-b07c-4dd1-a475-7b1d70b9bddc" (UID: "b23a9c04-b07c-4dd1-a475-7b1d70b9bddc"). InnerVolumeSpecName "kube-api-access-k4r6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:27:11 crc kubenswrapper[4892]: I0122 09:27:11.926358 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "b23a9c04-b07c-4dd1-a475-7b1d70b9bddc" (UID: "b23a9c04-b07c-4dd1-a475-7b1d70b9bddc"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:27:11 crc kubenswrapper[4892]: I0122 09:27:11.937882 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "b23a9c04-b07c-4dd1-a475-7b1d70b9bddc" (UID: "b23a9c04-b07c-4dd1-a475-7b1d70b9bddc"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:27:11 crc kubenswrapper[4892]: I0122 09:27:11.941037 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b23a9c04-b07c-4dd1-a475-7b1d70b9bddc" (UID: "b23a9c04-b07c-4dd1-a475-7b1d70b9bddc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:27:11 crc kubenswrapper[4892]: I0122 09:27:11.949720 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-scripts" (OuterVolumeSpecName: "scripts") pod "b23a9c04-b07c-4dd1-a475-7b1d70b9bddc" (UID: "b23a9c04-b07c-4dd1-a475-7b1d70b9bddc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:12 crc kubenswrapper[4892]: I0122 09:27:12.011011 4892 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:12 crc kubenswrapper[4892]: I0122 09:27:12.011506 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:12 crc kubenswrapper[4892]: I0122 09:27:12.011619 4892 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:12 crc kubenswrapper[4892]: I0122 09:27:12.011717 4892 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:12 crc kubenswrapper[4892]: I0122 09:27:12.011929 4892 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:12 crc kubenswrapper[4892]: I0122 09:27:12.012054 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4r6h\" (UniqueName: \"kubernetes.io/projected/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-kube-api-access-k4r6h\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:12 crc kubenswrapper[4892]: I0122 09:27:12.012112 4892 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b23a9c04-b07c-4dd1-a475-7b1d70b9bddc-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:12 crc kubenswrapper[4892]: I0122 09:27:12.408260 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-x9lwk" event={"ID":"b23a9c04-b07c-4dd1-a475-7b1d70b9bddc","Type":"ContainerDied","Data":"95d206b4c559c1fd95f4cf99ce26863b9fb24a3c8bbd1d177c7b69922a92e53a"} Jan 22 09:27:12 crc kubenswrapper[4892]: I0122 09:27:12.408311 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95d206b4c559c1fd95f4cf99ce26863b9fb24a3c8bbd1d177c7b69922a92e53a" Jan 22 09:27:12 crc kubenswrapper[4892]: I0122 09:27:12.408369 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-x9lwk" Jan 22 09:27:12 crc kubenswrapper[4892]: I0122 09:27:12.415377 4892 generic.go:334] "Generic (PLEG): container finished" podID="97dadffc-3fe4-4378-ada6-72b8262f2180" containerID="de49466765ecf204df91f01d6efb4f0058ea655787bbe0e721d5dc73a57d6c53" exitCode=0 Jan 22 09:27:12 crc kubenswrapper[4892]: I0122 09:27:12.415825 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f2tnj" event={"ID":"97dadffc-3fe4-4378-ada6-72b8262f2180","Type":"ContainerDied","Data":"de49466765ecf204df91f01d6efb4f0058ea655787bbe0e721d5dc73a57d6c53"} Jan 22 09:27:12 crc kubenswrapper[4892]: I0122 09:27:12.415859 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f2tnj" event={"ID":"97dadffc-3fe4-4378-ada6-72b8262f2180","Type":"ContainerStarted","Data":"145af3cabeb61d77067b9b0546030cb691f908bbe2e7afe2150707a90f3377ab"} Jan 22 09:27:12 crc kubenswrapper[4892]: I0122 09:27:12.740031 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-f8rll-config-xslt5" Jan 22 09:27:12 crc kubenswrapper[4892]: I0122 09:27:12.829647 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6hlx\" (UniqueName: \"kubernetes.io/projected/60acebc8-cbc3-48f5-9a95-e6486392d7f0-kube-api-access-v6hlx\") pod \"60acebc8-cbc3-48f5-9a95-e6486392d7f0\" (UID: \"60acebc8-cbc3-48f5-9a95-e6486392d7f0\") " Jan 22 09:27:12 crc kubenswrapper[4892]: I0122 09:27:12.829700 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/60acebc8-cbc3-48f5-9a95-e6486392d7f0-var-run\") pod \"60acebc8-cbc3-48f5-9a95-e6486392d7f0\" (UID: \"60acebc8-cbc3-48f5-9a95-e6486392d7f0\") " Jan 22 09:27:12 crc kubenswrapper[4892]: I0122 09:27:12.829736 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60acebc8-cbc3-48f5-9a95-e6486392d7f0-scripts\") pod \"60acebc8-cbc3-48f5-9a95-e6486392d7f0\" (UID: \"60acebc8-cbc3-48f5-9a95-e6486392d7f0\") " Jan 22 09:27:12 crc kubenswrapper[4892]: I0122 09:27:12.829794 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/60acebc8-cbc3-48f5-9a95-e6486392d7f0-var-run-ovn\") pod \"60acebc8-cbc3-48f5-9a95-e6486392d7f0\" (UID: \"60acebc8-cbc3-48f5-9a95-e6486392d7f0\") " Jan 22 09:27:12 crc kubenswrapper[4892]: I0122 09:27:12.829811 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/60acebc8-cbc3-48f5-9a95-e6486392d7f0-var-log-ovn\") pod \"60acebc8-cbc3-48f5-9a95-e6486392d7f0\" (UID: \"60acebc8-cbc3-48f5-9a95-e6486392d7f0\") " Jan 22 09:27:12 crc kubenswrapper[4892]: I0122 09:27:12.829948 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/60acebc8-cbc3-48f5-9a95-e6486392d7f0-additional-scripts\") pod \"60acebc8-cbc3-48f5-9a95-e6486392d7f0\" (UID: \"60acebc8-cbc3-48f5-9a95-e6486392d7f0\") " Jan 22 09:27:12 crc kubenswrapper[4892]: I0122 09:27:12.830485 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60acebc8-cbc3-48f5-9a95-e6486392d7f0-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "60acebc8-cbc3-48f5-9a95-e6486392d7f0" 
(UID: "60acebc8-cbc3-48f5-9a95-e6486392d7f0"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:27:12 crc kubenswrapper[4892]: I0122 09:27:12.830536 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60acebc8-cbc3-48f5-9a95-e6486392d7f0-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "60acebc8-cbc3-48f5-9a95-e6486392d7f0" (UID: "60acebc8-cbc3-48f5-9a95-e6486392d7f0"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:27:12 crc kubenswrapper[4892]: I0122 09:27:12.830554 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60acebc8-cbc3-48f5-9a95-e6486392d7f0-var-run" (OuterVolumeSpecName: "var-run") pod "60acebc8-cbc3-48f5-9a95-e6486392d7f0" (UID: "60acebc8-cbc3-48f5-9a95-e6486392d7f0"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:27:12 crc kubenswrapper[4892]: I0122 09:27:12.831176 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60acebc8-cbc3-48f5-9a95-e6486392d7f0-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "60acebc8-cbc3-48f5-9a95-e6486392d7f0" (UID: "60acebc8-cbc3-48f5-9a95-e6486392d7f0"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:12 crc kubenswrapper[4892]: I0122 09:27:12.834590 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60acebc8-cbc3-48f5-9a95-e6486392d7f0-scripts" (OuterVolumeSpecName: "scripts") pod "60acebc8-cbc3-48f5-9a95-e6486392d7f0" (UID: "60acebc8-cbc3-48f5-9a95-e6486392d7f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:12 crc kubenswrapper[4892]: I0122 09:27:12.835563 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60acebc8-cbc3-48f5-9a95-e6486392d7f0-kube-api-access-v6hlx" (OuterVolumeSpecName: "kube-api-access-v6hlx") pod "60acebc8-cbc3-48f5-9a95-e6486392d7f0" (UID: "60acebc8-cbc3-48f5-9a95-e6486392d7f0"). InnerVolumeSpecName "kube-api-access-v6hlx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:27:12 crc kubenswrapper[4892]: I0122 09:27:12.932548 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6hlx\" (UniqueName: \"kubernetes.io/projected/60acebc8-cbc3-48f5-9a95-e6486392d7f0-kube-api-access-v6hlx\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:12 crc kubenswrapper[4892]: I0122 09:27:12.932870 4892 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/60acebc8-cbc3-48f5-9a95-e6486392d7f0-var-run\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:12 crc kubenswrapper[4892]: I0122 09:27:12.932880 4892 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60acebc8-cbc3-48f5-9a95-e6486392d7f0-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:12 crc kubenswrapper[4892]: I0122 09:27:12.932888 4892 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/60acebc8-cbc3-48f5-9a95-e6486392d7f0-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:12 crc kubenswrapper[4892]: I0122 09:27:12.932897 4892 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/60acebc8-cbc3-48f5-9a95-e6486392d7f0-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:12 crc kubenswrapper[4892]: I0122 09:27:12.932905 4892 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/60acebc8-cbc3-48f5-9a95-e6486392d7f0-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:13 crc kubenswrapper[4892]: I0122 09:27:13.431220 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-f8rll-config-xslt5" event={"ID":"60acebc8-cbc3-48f5-9a95-e6486392d7f0","Type":"ContainerDied","Data":"cc56f44fccc97dba19cece5cb4bf3ca29b09e990b9ab399b646a32dfbfdcf3ce"} Jan 22 09:27:13 crc kubenswrapper[4892]: I0122 09:27:13.431270 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc56f44fccc97dba19cece5cb4bf3ca29b09e990b9ab399b646a32dfbfdcf3ce" Jan 22 09:27:13 crc kubenswrapper[4892]: I0122 09:27:13.431242 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-f8rll-config-xslt5" Jan 22 09:27:13 crc kubenswrapper[4892]: I0122 09:27:13.442406 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d0c2888e-984a-482d-b7a3-5de66720aaf8-etc-swift\") pod \"swift-storage-0\" (UID: \"d0c2888e-984a-482d-b7a3-5de66720aaf8\") " pod="openstack/swift-storage-0" Jan 22 09:27:13 crc kubenswrapper[4892]: I0122 09:27:13.451496 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d0c2888e-984a-482d-b7a3-5de66720aaf8-etc-swift\") pod \"swift-storage-0\" (UID: \"d0c2888e-984a-482d-b7a3-5de66720aaf8\") " pod="openstack/swift-storage-0" Jan 22 09:27:13 crc kubenswrapper[4892]: I0122 09:27:13.476665 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 22 09:27:13 crc kubenswrapper[4892]: I0122 09:27:13.715085 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-f2tnj" Jan 22 09:27:13 crc kubenswrapper[4892]: I0122 09:27:13.747456 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97dadffc-3fe4-4378-ada6-72b8262f2180-operator-scripts\") pod \"97dadffc-3fe4-4378-ada6-72b8262f2180\" (UID: \"97dadffc-3fe4-4378-ada6-72b8262f2180\") " Jan 22 09:27:13 crc kubenswrapper[4892]: I0122 09:27:13.747504 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8dfj\" (UniqueName: \"kubernetes.io/projected/97dadffc-3fe4-4378-ada6-72b8262f2180-kube-api-access-s8dfj\") pod \"97dadffc-3fe4-4378-ada6-72b8262f2180\" (UID: \"97dadffc-3fe4-4378-ada6-72b8262f2180\") " Jan 22 09:27:13 crc kubenswrapper[4892]: I0122 09:27:13.748216 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97dadffc-3fe4-4378-ada6-72b8262f2180-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "97dadffc-3fe4-4378-ada6-72b8262f2180" (UID: "97dadffc-3fe4-4378-ada6-72b8262f2180"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:13 crc kubenswrapper[4892]: I0122 09:27:13.751665 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97dadffc-3fe4-4378-ada6-72b8262f2180-kube-api-access-s8dfj" (OuterVolumeSpecName: "kube-api-access-s8dfj") pod "97dadffc-3fe4-4378-ada6-72b8262f2180" (UID: "97dadffc-3fe4-4378-ada6-72b8262f2180"). InnerVolumeSpecName "kube-api-access-s8dfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:27:13 crc kubenswrapper[4892]: I0122 09:27:13.829407 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-f8rll-config-xslt5"] Jan 22 09:27:13 crc kubenswrapper[4892]: I0122 09:27:13.850205 4892 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97dadffc-3fe4-4378-ada6-72b8262f2180-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:13 crc kubenswrapper[4892]: I0122 09:27:13.850241 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8dfj\" (UniqueName: \"kubernetes.io/projected/97dadffc-3fe4-4378-ada6-72b8262f2180-kube-api-access-s8dfj\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:13 crc kubenswrapper[4892]: I0122 09:27:13.856058 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-f8rll-config-xslt5"] Jan 22 09:27:14 crc kubenswrapper[4892]: W0122 09:27:14.070781 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0c2888e_984a_482d_b7a3_5de66720aaf8.slice/crio-c8dbff73c342d79f1fa72310ff70e7f7a7739829f336581fa88e4b5c250c811f WatchSource:0}: Error finding container c8dbff73c342d79f1fa72310ff70e7f7a7739829f336581fa88e4b5c250c811f: Status 404 returned error can't find the container with id c8dbff73c342d79f1fa72310ff70e7f7a7739829f336581fa88e4b5c250c811f Jan 22 09:27:14 crc kubenswrapper[4892]: I0122 09:27:14.075905 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 22 09:27:14 crc kubenswrapper[4892]: I0122 09:27:14.435797 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-f8rll" Jan 22 09:27:14 crc kubenswrapper[4892]: I0122 09:27:14.447842 4892 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0c2888e-984a-482d-b7a3-5de66720aaf8","Type":"ContainerStarted","Data":"c8dbff73c342d79f1fa72310ff70e7f7a7739829f336581fa88e4b5c250c811f"} Jan 22 09:27:14 crc kubenswrapper[4892]: I0122 09:27:14.463795 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f2tnj" event={"ID":"97dadffc-3fe4-4378-ada6-72b8262f2180","Type":"ContainerDied","Data":"145af3cabeb61d77067b9b0546030cb691f908bbe2e7afe2150707a90f3377ab"} Jan 22 09:27:14 crc kubenswrapper[4892]: I0122 09:27:14.463834 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="145af3cabeb61d77067b9b0546030cb691f908bbe2e7afe2150707a90f3377ab" Jan 22 09:27:14 crc kubenswrapper[4892]: I0122 09:27:14.463890 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-f2tnj" Jan 22 09:27:14 crc kubenswrapper[4892]: E0122 09:27:14.532826 4892 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97dadffc_3fe4_4378_ada6_72b8262f2180.slice\": RecentStats: unable to find data in memory cache]" Jan 22 09:27:15 crc kubenswrapper[4892]: I0122 09:27:15.427826 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60acebc8-cbc3-48f5-9a95-e6486392d7f0" path="/var/lib/kubelet/pods/60acebc8-cbc3-48f5-9a95-e6486392d7f0/volumes" Jan 22 09:27:16 crc kubenswrapper[4892]: I0122 09:27:16.323011 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:27:16 crc kubenswrapper[4892]: I0122 09:27:16.323372 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:27:16 crc kubenswrapper[4892]: I0122 09:27:16.482494 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0c2888e-984a-482d-b7a3-5de66720aaf8","Type":"ContainerStarted","Data":"a4e2e0e15f5f19106349a76dc74fe8007c3e034870fe940fee83d0e85a99e084"} Jan 22 09:27:16 crc kubenswrapper[4892]: I0122 09:27:16.482543 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0c2888e-984a-482d-b7a3-5de66720aaf8","Type":"ContainerStarted","Data":"8376f78f9d1ffb7f84ab688cf9e70941fd54fa4c742ba8535f806d0c375d510b"} Jan 22 09:27:16 crc kubenswrapper[4892]: I0122 09:27:16.482556 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0c2888e-984a-482d-b7a3-5de66720aaf8","Type":"ContainerStarted","Data":"84b0206c403b8e2fc05c7f6e24d686abe9f8a41817239547edcef5fc664d075e"} Jan 22 09:27:16 crc kubenswrapper[4892]: I0122 09:27:16.482567 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0c2888e-984a-482d-b7a3-5de66720aaf8","Type":"ContainerStarted","Data":"979e6bffdc684bc26333eb08d3da09a748b2dc22f1f23b06db8c96c70e09effb"} Jan 22 09:27:17 crc kubenswrapper[4892]: I0122 09:27:17.268338 
4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-f2tnj"] Jan 22 09:27:17 crc kubenswrapper[4892]: I0122 09:27:17.277353 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-f2tnj"] Jan 22 09:27:17 crc kubenswrapper[4892]: I0122 09:27:17.446262 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97dadffc-3fe4-4378-ada6-72b8262f2180" path="/var/lib/kubelet/pods/97dadffc-3fe4-4378-ada6-72b8262f2180/volumes" Jan 22 09:27:19 crc kubenswrapper[4892]: I0122 09:27:19.515997 4892 generic.go:334] "Generic (PLEG): container finished" podID="c3106222-75cd-4011-a7d0-33a3d39e3f0c" containerID="9e7e5c6b61a197138f10c0535d765b391add3e2e041ba0cdac992275d295244f" exitCode=0 Jan 22 09:27:19 crc kubenswrapper[4892]: I0122 09:27:19.516094 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c3106222-75cd-4011-a7d0-33a3d39e3f0c","Type":"ContainerDied","Data":"9e7e5c6b61a197138f10c0535d765b391add3e2e041ba0cdac992275d295244f"} Jan 22 09:27:19 crc kubenswrapper[4892]: I0122 09:27:19.521009 4892 generic.go:334] "Generic (PLEG): container finished" podID="ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f" containerID="5e4f7488273b1bcc89697c8644593504782d89df4d3e37c391c0d53e1f87559c" exitCode=0 Jan 22 09:27:19 crc kubenswrapper[4892]: I0122 09:27:19.521058 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f","Type":"ContainerDied","Data":"5e4f7488273b1bcc89697c8644593504782d89df4d3e37c391c0d53e1f87559c"} Jan 22 09:27:22 crc kubenswrapper[4892]: I0122 09:27:22.282806 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-lgcpd"] Jan 22 09:27:22 crc kubenswrapper[4892]: E0122 09:27:22.284092 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60acebc8-cbc3-48f5-9a95-e6486392d7f0" containerName="ovn-config" Jan 22 09:27:22 crc kubenswrapper[4892]: I0122 09:27:22.284112 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="60acebc8-cbc3-48f5-9a95-e6486392d7f0" containerName="ovn-config" Jan 22 09:27:22 crc kubenswrapper[4892]: E0122 09:27:22.284129 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b23a9c04-b07c-4dd1-a475-7b1d70b9bddc" containerName="swift-ring-rebalance" Jan 22 09:27:22 crc kubenswrapper[4892]: I0122 09:27:22.284137 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23a9c04-b07c-4dd1-a475-7b1d70b9bddc" containerName="swift-ring-rebalance" Jan 22 09:27:22 crc kubenswrapper[4892]: E0122 09:27:22.284166 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97dadffc-3fe4-4378-ada6-72b8262f2180" containerName="mariadb-account-create-update" Jan 22 09:27:22 crc kubenswrapper[4892]: I0122 09:27:22.284175 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="97dadffc-3fe4-4378-ada6-72b8262f2180" containerName="mariadb-account-create-update" Jan 22 09:27:22 crc kubenswrapper[4892]: I0122 09:27:22.284375 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="97dadffc-3fe4-4378-ada6-72b8262f2180" containerName="mariadb-account-create-update" Jan 22 09:27:22 crc kubenswrapper[4892]: I0122 09:27:22.284400 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23a9c04-b07c-4dd1-a475-7b1d70b9bddc" containerName="swift-ring-rebalance" Jan 22 09:27:22 crc kubenswrapper[4892]: I0122 09:27:22.284414 4892 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="60acebc8-cbc3-48f5-9a95-e6486392d7f0" containerName="ovn-config" Jan 22 09:27:22 crc kubenswrapper[4892]: I0122 09:27:22.285149 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lgcpd" Jan 22 09:27:22 crc kubenswrapper[4892]: I0122 09:27:22.294840 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 22 09:27:22 crc kubenswrapper[4892]: I0122 09:27:22.307732 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lgcpd"] Jan 22 09:27:22 crc kubenswrapper[4892]: I0122 09:27:22.323240 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46508830-6103-4f2f-b82d-9ca1fb7ae748-operator-scripts\") pod \"root-account-create-update-lgcpd\" (UID: \"46508830-6103-4f2f-b82d-9ca1fb7ae748\") " pod="openstack/root-account-create-update-lgcpd" Jan 22 09:27:22 crc kubenswrapper[4892]: I0122 09:27:22.323506 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdsds\" (UniqueName: \"kubernetes.io/projected/46508830-6103-4f2f-b82d-9ca1fb7ae748-kube-api-access-pdsds\") pod \"root-account-create-update-lgcpd\" (UID: \"46508830-6103-4f2f-b82d-9ca1fb7ae748\") " pod="openstack/root-account-create-update-lgcpd" Jan 22 09:27:22 crc kubenswrapper[4892]: I0122 09:27:22.425182 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdsds\" (UniqueName: \"kubernetes.io/projected/46508830-6103-4f2f-b82d-9ca1fb7ae748-kube-api-access-pdsds\") pod \"root-account-create-update-lgcpd\" (UID: \"46508830-6103-4f2f-b82d-9ca1fb7ae748\") " pod="openstack/root-account-create-update-lgcpd" Jan 22 09:27:22 crc kubenswrapper[4892]: I0122 09:27:22.425305 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46508830-6103-4f2f-b82d-9ca1fb7ae748-operator-scripts\") pod \"root-account-create-update-lgcpd\" (UID: \"46508830-6103-4f2f-b82d-9ca1fb7ae748\") " pod="openstack/root-account-create-update-lgcpd" Jan 22 09:27:22 crc kubenswrapper[4892]: I0122 09:27:22.426225 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46508830-6103-4f2f-b82d-9ca1fb7ae748-operator-scripts\") pod \"root-account-create-update-lgcpd\" (UID: \"46508830-6103-4f2f-b82d-9ca1fb7ae748\") " pod="openstack/root-account-create-update-lgcpd" Jan 22 09:27:22 crc kubenswrapper[4892]: I0122 09:27:22.446377 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdsds\" (UniqueName: \"kubernetes.io/projected/46508830-6103-4f2f-b82d-9ca1fb7ae748-kube-api-access-pdsds\") pod \"root-account-create-update-lgcpd\" (UID: \"46508830-6103-4f2f-b82d-9ca1fb7ae748\") " pod="openstack/root-account-create-update-lgcpd" Jan 22 09:27:22 crc kubenswrapper[4892]: I0122 09:27:22.603730 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lgcpd" Jan 22 09:27:23 crc kubenswrapper[4892]: I0122 09:27:23.914228 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lgcpd"] Jan 22 09:27:24 crc kubenswrapper[4892]: I0122 09:27:24.559471 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c3106222-75cd-4011-a7d0-33a3d39e3f0c","Type":"ContainerStarted","Data":"761cbc2ad31d8c772853deace9c46eb9472b5e14da71aff569f880d3995af45e"} Jan 22 09:27:24 crc kubenswrapper[4892]: I0122 09:27:24.559914 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 22 09:27:24 crc kubenswrapper[4892]: I0122 09:27:24.561773 4892 generic.go:334] "Generic (PLEG): container finished" podID="46508830-6103-4f2f-b82d-9ca1fb7ae748" containerID="c76fa1524bd4bfc81dba90506da04c242fa2fc65976ed9dae44662e12297bcf0" exitCode=0 Jan 22 09:27:24 crc kubenswrapper[4892]: I0122 09:27:24.561848 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lgcpd" event={"ID":"46508830-6103-4f2f-b82d-9ca1fb7ae748","Type":"ContainerDied","Data":"c76fa1524bd4bfc81dba90506da04c242fa2fc65976ed9dae44662e12297bcf0"} Jan 22 09:27:24 crc kubenswrapper[4892]: I0122 09:27:24.561873 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lgcpd" event={"ID":"46508830-6103-4f2f-b82d-9ca1fb7ae748","Type":"ContainerStarted","Data":"dcc696fa3e50ba78c47087d13e05680e78a1dd54cd806a2a03f5da91b37736cb"} Jan 22 09:27:24 crc kubenswrapper[4892]: I0122 09:27:24.563347 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f","Type":"ContainerStarted","Data":"0ac4cc9da7af0c64469774609d98e8d5f0e4f4eb8ef412f9a43e413357fde958"} Jan 22 09:27:24 crc kubenswrapper[4892]: I0122 09:27:24.563547 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:27:24 crc kubenswrapper[4892]: I0122 09:27:24.566932 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0c2888e-984a-482d-b7a3-5de66720aaf8","Type":"ContainerStarted","Data":"86c1de3c3e982eb793066f94c537e39a50d5b2892c63082de42343fe19116c26"} Jan 22 09:27:24 crc kubenswrapper[4892]: I0122 09:27:24.566965 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0c2888e-984a-482d-b7a3-5de66720aaf8","Type":"ContainerStarted","Data":"7cb584c962948a2c446d96e4342aba149cb16078ddd57d39930b0eae252a33f4"} Jan 22 09:27:24 crc kubenswrapper[4892]: I0122 09:27:24.566976 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0c2888e-984a-482d-b7a3-5de66720aaf8","Type":"ContainerStarted","Data":"f9be0c331fc6bf31b088f90d4d1fe5fe9a9d9b34f1271872a2c6f86a53bf1e0e"} Jan 22 09:27:24 crc kubenswrapper[4892]: I0122 09:27:24.566986 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0c2888e-984a-482d-b7a3-5de66720aaf8","Type":"ContainerStarted","Data":"410e06074617cbc5cba495e2c89e6ec1f016e45b73142168bc9e4971d74845e6"} Jan 22 09:27:24 crc kubenswrapper[4892]: I0122 09:27:24.568713 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nhpjt" 
event={"ID":"f2c286dd-c33b-453a-abdd-90baff8ff466","Type":"ContainerStarted","Data":"2008da1a8f75c2437ac912466a1ed4042821a1f7e816d3ca5960958d162ae924"} Jan 22 09:27:24 crc kubenswrapper[4892]: I0122 09:27:24.595868 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=42.296585439 podStartE2EDuration="1m25.595842133s" podCreationTimestamp="2026-01-22 09:25:59 +0000 UTC" firstStartedPulling="2026-01-22 09:26:02.19527037 +0000 UTC m=+932.039349433" lastFinishedPulling="2026-01-22 09:26:45.494527064 +0000 UTC m=+975.338606127" observedRunningTime="2026-01-22 09:27:24.589686405 +0000 UTC m=+1014.433765468" watchObservedRunningTime="2026-01-22 09:27:24.595842133 +0000 UTC m=+1014.439921196" Jan 22 09:27:24 crc kubenswrapper[4892]: I0122 09:27:24.636052 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=42.324891623 podStartE2EDuration="1m25.636028176s" podCreationTimestamp="2026-01-22 09:25:59 +0000 UTC" firstStartedPulling="2026-01-22 09:26:02.266472773 +0000 UTC m=+932.110551836" lastFinishedPulling="2026-01-22 09:26:45.577609316 +0000 UTC m=+975.421688389" observedRunningTime="2026-01-22 09:27:24.626856576 +0000 UTC m=+1014.470935649" watchObservedRunningTime="2026-01-22 09:27:24.636028176 +0000 UTC m=+1014.480107239" Jan 22 09:27:24 crc kubenswrapper[4892]: I0122 09:27:24.659454 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-nhpjt" podStartSLOduration=2.506959429 podStartE2EDuration="15.659432218s" podCreationTimestamp="2026-01-22 09:27:09 +0000 UTC" firstStartedPulling="2026-01-22 09:27:10.348030239 +0000 UTC m=+1000.192109302" lastFinishedPulling="2026-01-22 09:27:23.500503008 +0000 UTC m=+1013.344582091" observedRunningTime="2026-01-22 09:27:24.655534754 +0000 UTC m=+1014.499613827" watchObservedRunningTime="2026-01-22 09:27:24.659432218 +0000 UTC m=+1014.503511281" Jan 22 09:27:25 crc kubenswrapper[4892]: I0122 09:27:25.582469 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0c2888e-984a-482d-b7a3-5de66720aaf8","Type":"ContainerStarted","Data":"e961638f8cafe585ffc58baa808c73d9298df2083262d52384eaa6a1ee9e73fb"} Jan 22 09:27:25 crc kubenswrapper[4892]: I0122 09:27:25.881352 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lgcpd" Jan 22 09:27:25 crc kubenswrapper[4892]: I0122 09:27:25.908087 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdsds\" (UniqueName: \"kubernetes.io/projected/46508830-6103-4f2f-b82d-9ca1fb7ae748-kube-api-access-pdsds\") pod \"46508830-6103-4f2f-b82d-9ca1fb7ae748\" (UID: \"46508830-6103-4f2f-b82d-9ca1fb7ae748\") " Jan 22 09:27:25 crc kubenswrapper[4892]: I0122 09:27:25.908232 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46508830-6103-4f2f-b82d-9ca1fb7ae748-operator-scripts\") pod \"46508830-6103-4f2f-b82d-9ca1fb7ae748\" (UID: \"46508830-6103-4f2f-b82d-9ca1fb7ae748\") " Jan 22 09:27:25 crc kubenswrapper[4892]: I0122 09:27:25.908751 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46508830-6103-4f2f-b82d-9ca1fb7ae748-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "46508830-6103-4f2f-b82d-9ca1fb7ae748" (UID: "46508830-6103-4f2f-b82d-9ca1fb7ae748"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:25 crc kubenswrapper[4892]: I0122 09:27:25.913583 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46508830-6103-4f2f-b82d-9ca1fb7ae748-kube-api-access-pdsds" (OuterVolumeSpecName: "kube-api-access-pdsds") pod "46508830-6103-4f2f-b82d-9ca1fb7ae748" (UID: "46508830-6103-4f2f-b82d-9ca1fb7ae748"). InnerVolumeSpecName "kube-api-access-pdsds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:27:26 crc kubenswrapper[4892]: I0122 09:27:26.010628 4892 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46508830-6103-4f2f-b82d-9ca1fb7ae748-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:26 crc kubenswrapper[4892]: I0122 09:27:26.011006 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdsds\" (UniqueName: \"kubernetes.io/projected/46508830-6103-4f2f-b82d-9ca1fb7ae748-kube-api-access-pdsds\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:26 crc kubenswrapper[4892]: I0122 09:27:26.590801 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lgcpd" event={"ID":"46508830-6103-4f2f-b82d-9ca1fb7ae748","Type":"ContainerDied","Data":"dcc696fa3e50ba78c47087d13e05680e78a1dd54cd806a2a03f5da91b37736cb"} Jan 22 09:27:26 crc kubenswrapper[4892]: I0122 09:27:26.590830 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcc696fa3e50ba78c47087d13e05680e78a1dd54cd806a2a03f5da91b37736cb" Jan 22 09:27:26 crc kubenswrapper[4892]: I0122 09:27:26.590884 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lgcpd" Jan 22 09:27:26 crc kubenswrapper[4892]: I0122 09:27:26.601433 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0c2888e-984a-482d-b7a3-5de66720aaf8","Type":"ContainerStarted","Data":"21d4533de216bd1aff0904e8756c9a38960effd9acd1e70bc8c9f0953003697a"} Jan 22 09:27:26 crc kubenswrapper[4892]: I0122 09:27:26.601471 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0c2888e-984a-482d-b7a3-5de66720aaf8","Type":"ContainerStarted","Data":"0998959433ab5a4fc31514561bab29dfa263796ae6d918bbba1bdff1e808ce47"} Jan 22 09:27:26 crc kubenswrapper[4892]: I0122 09:27:26.601493 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0c2888e-984a-482d-b7a3-5de66720aaf8","Type":"ContainerStarted","Data":"7807ccfc0f6b76e72609aa91431687a2980fd1da6715fea20226bfe216adf269"} Jan 22 09:27:26 crc kubenswrapper[4892]: I0122 09:27:26.601506 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0c2888e-984a-482d-b7a3-5de66720aaf8","Type":"ContainerStarted","Data":"1dacff24be3312b3cbf7219f1547725429519cf60ad6d67864a546d927c25b84"} Jan 22 09:27:26 crc kubenswrapper[4892]: I0122 09:27:26.601517 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0c2888e-984a-482d-b7a3-5de66720aaf8","Type":"ContainerStarted","Data":"01a19e17ca2e5f0e50bfb557e91f8fc2c93984adea0f364d6eddb9ddbe478d35"} Jan 22 09:27:26 crc kubenswrapper[4892]: I0122 09:27:26.601526 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0c2888e-984a-482d-b7a3-5de66720aaf8","Type":"ContainerStarted","Data":"9e520cde96e09ab5be9962fe98466c2296ff0a3f8976f62e610bbe43d149405f"} Jan 22 09:27:26 crc kubenswrapper[4892]: I0122 09:27:26.638663 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=19.363456479 podStartE2EDuration="30.638646095s" podCreationTimestamp="2026-01-22 09:26:56 +0000 UTC" firstStartedPulling="2026-01-22 09:27:14.072687678 +0000 UTC m=+1003.916766741" lastFinishedPulling="2026-01-22 09:27:25.347877294 +0000 UTC m=+1015.191956357" observedRunningTime="2026-01-22 09:27:26.633949122 +0000 UTC m=+1016.478028185" watchObservedRunningTime="2026-01-22 09:27:26.638646095 +0000 UTC m=+1016.482725158" Jan 22 09:27:26 crc kubenswrapper[4892]: I0122 09:27:26.914040 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-4t885"] Jan 22 09:27:26 crc kubenswrapper[4892]: E0122 09:27:26.914659 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46508830-6103-4f2f-b82d-9ca1fb7ae748" containerName="mariadb-account-create-update" Jan 22 09:27:26 crc kubenswrapper[4892]: I0122 09:27:26.914675 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="46508830-6103-4f2f-b82d-9ca1fb7ae748" containerName="mariadb-account-create-update" Jan 22 09:27:26 crc kubenswrapper[4892]: I0122 09:27:26.914812 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="46508830-6103-4f2f-b82d-9ca1fb7ae748" containerName="mariadb-account-create-update" Jan 22 09:27:26 crc kubenswrapper[4892]: I0122 09:27:26.915622 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8467b54bcc-4t885" Jan 22 09:27:26 crc kubenswrapper[4892]: I0122 09:27:26.917356 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 22 09:27:26 crc kubenswrapper[4892]: I0122 09:27:26.927809 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c0e04d1-d91d-4fc2-94ec-c46489638389-dns-swift-storage-0\") pod \"dnsmasq-dns-8467b54bcc-4t885\" (UID: \"3c0e04d1-d91d-4fc2-94ec-c46489638389\") " pod="openstack/dnsmasq-dns-8467b54bcc-4t885" Jan 22 09:27:26 crc kubenswrapper[4892]: I0122 09:27:26.927861 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwhr5\" (UniqueName: \"kubernetes.io/projected/3c0e04d1-d91d-4fc2-94ec-c46489638389-kube-api-access-hwhr5\") pod \"dnsmasq-dns-8467b54bcc-4t885\" (UID: \"3c0e04d1-d91d-4fc2-94ec-c46489638389\") " pod="openstack/dnsmasq-dns-8467b54bcc-4t885" Jan 22 09:27:26 crc kubenswrapper[4892]: I0122 09:27:26.927884 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c0e04d1-d91d-4fc2-94ec-c46489638389-ovsdbserver-sb\") pod \"dnsmasq-dns-8467b54bcc-4t885\" (UID: \"3c0e04d1-d91d-4fc2-94ec-c46489638389\") " pod="openstack/dnsmasq-dns-8467b54bcc-4t885" Jan 22 09:27:26 crc kubenswrapper[4892]: I0122 09:27:26.927901 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c0e04d1-d91d-4fc2-94ec-c46489638389-dns-svc\") pod \"dnsmasq-dns-8467b54bcc-4t885\" (UID: \"3c0e04d1-d91d-4fc2-94ec-c46489638389\") " pod="openstack/dnsmasq-dns-8467b54bcc-4t885" Jan 22 09:27:26 crc kubenswrapper[4892]: I0122 09:27:26.927926 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c0e04d1-d91d-4fc2-94ec-c46489638389-ovsdbserver-nb\") pod \"dnsmasq-dns-8467b54bcc-4t885\" (UID: \"3c0e04d1-d91d-4fc2-94ec-c46489638389\") " pod="openstack/dnsmasq-dns-8467b54bcc-4t885" Jan 22 09:27:26 crc kubenswrapper[4892]: I0122 09:27:26.928182 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c0e04d1-d91d-4fc2-94ec-c46489638389-config\") pod \"dnsmasq-dns-8467b54bcc-4t885\" (UID: \"3c0e04d1-d91d-4fc2-94ec-c46489638389\") " pod="openstack/dnsmasq-dns-8467b54bcc-4t885" Jan 22 09:27:26 crc kubenswrapper[4892]: I0122 09:27:26.977101 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-4t885"] Jan 22 09:27:27 crc kubenswrapper[4892]: I0122 09:27:27.029849 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c0e04d1-d91d-4fc2-94ec-c46489638389-dns-swift-storage-0\") pod \"dnsmasq-dns-8467b54bcc-4t885\" (UID: \"3c0e04d1-d91d-4fc2-94ec-c46489638389\") " pod="openstack/dnsmasq-dns-8467b54bcc-4t885" Jan 22 09:27:27 crc kubenswrapper[4892]: I0122 09:27:27.029889 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwhr5\" (UniqueName: \"kubernetes.io/projected/3c0e04d1-d91d-4fc2-94ec-c46489638389-kube-api-access-hwhr5\") pod \"dnsmasq-dns-8467b54bcc-4t885\" 
(UID: \"3c0e04d1-d91d-4fc2-94ec-c46489638389\") " pod="openstack/dnsmasq-dns-8467b54bcc-4t885" Jan 22 09:27:27 crc kubenswrapper[4892]: I0122 09:27:27.029909 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c0e04d1-d91d-4fc2-94ec-c46489638389-ovsdbserver-sb\") pod \"dnsmasq-dns-8467b54bcc-4t885\" (UID: \"3c0e04d1-d91d-4fc2-94ec-c46489638389\") " pod="openstack/dnsmasq-dns-8467b54bcc-4t885" Jan 22 09:27:27 crc kubenswrapper[4892]: I0122 09:27:27.029924 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c0e04d1-d91d-4fc2-94ec-c46489638389-dns-svc\") pod \"dnsmasq-dns-8467b54bcc-4t885\" (UID: \"3c0e04d1-d91d-4fc2-94ec-c46489638389\") " pod="openstack/dnsmasq-dns-8467b54bcc-4t885" Jan 22 09:27:27 crc kubenswrapper[4892]: I0122 09:27:27.029942 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c0e04d1-d91d-4fc2-94ec-c46489638389-ovsdbserver-nb\") pod \"dnsmasq-dns-8467b54bcc-4t885\" (UID: \"3c0e04d1-d91d-4fc2-94ec-c46489638389\") " pod="openstack/dnsmasq-dns-8467b54bcc-4t885" Jan 22 09:27:27 crc kubenswrapper[4892]: I0122 09:27:27.029986 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c0e04d1-d91d-4fc2-94ec-c46489638389-config\") pod \"dnsmasq-dns-8467b54bcc-4t885\" (UID: \"3c0e04d1-d91d-4fc2-94ec-c46489638389\") " pod="openstack/dnsmasq-dns-8467b54bcc-4t885" Jan 22 09:27:27 crc kubenswrapper[4892]: I0122 09:27:27.030958 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c0e04d1-d91d-4fc2-94ec-c46489638389-config\") pod \"dnsmasq-dns-8467b54bcc-4t885\" (UID: \"3c0e04d1-d91d-4fc2-94ec-c46489638389\") " pod="openstack/dnsmasq-dns-8467b54bcc-4t885" Jan 22 09:27:27 crc kubenswrapper[4892]: I0122 09:27:27.031037 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c0e04d1-d91d-4fc2-94ec-c46489638389-dns-svc\") pod \"dnsmasq-dns-8467b54bcc-4t885\" (UID: \"3c0e04d1-d91d-4fc2-94ec-c46489638389\") " pod="openstack/dnsmasq-dns-8467b54bcc-4t885" Jan 22 09:27:27 crc kubenswrapper[4892]: I0122 09:27:27.032840 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c0e04d1-d91d-4fc2-94ec-c46489638389-dns-swift-storage-0\") pod \"dnsmasq-dns-8467b54bcc-4t885\" (UID: \"3c0e04d1-d91d-4fc2-94ec-c46489638389\") " pod="openstack/dnsmasq-dns-8467b54bcc-4t885" Jan 22 09:27:27 crc kubenswrapper[4892]: I0122 09:27:27.033823 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c0e04d1-d91d-4fc2-94ec-c46489638389-ovsdbserver-nb\") pod \"dnsmasq-dns-8467b54bcc-4t885\" (UID: \"3c0e04d1-d91d-4fc2-94ec-c46489638389\") " pod="openstack/dnsmasq-dns-8467b54bcc-4t885" Jan 22 09:27:27 crc kubenswrapper[4892]: I0122 09:27:27.034021 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c0e04d1-d91d-4fc2-94ec-c46489638389-ovsdbserver-sb\") pod \"dnsmasq-dns-8467b54bcc-4t885\" (UID: \"3c0e04d1-d91d-4fc2-94ec-c46489638389\") " pod="openstack/dnsmasq-dns-8467b54bcc-4t885" Jan 22 09:27:27 crc kubenswrapper[4892]: I0122 
09:27:27.051940 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwhr5\" (UniqueName: \"kubernetes.io/projected/3c0e04d1-d91d-4fc2-94ec-c46489638389-kube-api-access-hwhr5\") pod \"dnsmasq-dns-8467b54bcc-4t885\" (UID: \"3c0e04d1-d91d-4fc2-94ec-c46489638389\") " pod="openstack/dnsmasq-dns-8467b54bcc-4t885" Jan 22 09:27:27 crc kubenswrapper[4892]: I0122 09:27:27.231534 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8467b54bcc-4t885" Jan 22 09:27:27 crc kubenswrapper[4892]: I0122 09:27:27.766060 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-4t885"] Jan 22 09:27:27 crc kubenswrapper[4892]: W0122 09:27:27.780548 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c0e04d1_d91d_4fc2_94ec_c46489638389.slice/crio-85a9164450019afedf522b6d2b389b3a4b489b82cfb7b9337f345be9ba5156a6 WatchSource:0}: Error finding container 85a9164450019afedf522b6d2b389b3a4b489b82cfb7b9337f345be9ba5156a6: Status 404 returned error can't find the container with id 85a9164450019afedf522b6d2b389b3a4b489b82cfb7b9337f345be9ba5156a6 Jan 22 09:27:28 crc kubenswrapper[4892]: I0122 09:27:28.617018 4892 generic.go:334] "Generic (PLEG): container finished" podID="3c0e04d1-d91d-4fc2-94ec-c46489638389" containerID="19b2fc53b36e767062f5642dc040438b099f2c4499939b28053a6a3f31123a66" exitCode=0 Jan 22 09:27:28 crc kubenswrapper[4892]: I0122 09:27:28.617091 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-4t885" event={"ID":"3c0e04d1-d91d-4fc2-94ec-c46489638389","Type":"ContainerDied","Data":"19b2fc53b36e767062f5642dc040438b099f2c4499939b28053a6a3f31123a66"} Jan 22 09:27:28 crc kubenswrapper[4892]: I0122 09:27:28.617368 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-4t885" event={"ID":"3c0e04d1-d91d-4fc2-94ec-c46489638389","Type":"ContainerStarted","Data":"85a9164450019afedf522b6d2b389b3a4b489b82cfb7b9337f345be9ba5156a6"} Jan 22 09:27:29 crc kubenswrapper[4892]: I0122 09:27:29.626981 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-4t885" event={"ID":"3c0e04d1-d91d-4fc2-94ec-c46489638389","Type":"ContainerStarted","Data":"8d6a1603fa048ebe81c7e90620e669965feae9a26f1ed44ed70ad28bd5eb6eca"} Jan 22 09:27:29 crc kubenswrapper[4892]: I0122 09:27:29.627683 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8467b54bcc-4t885" Jan 22 09:27:29 crc kubenswrapper[4892]: I0122 09:27:29.648436 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8467b54bcc-4t885" podStartSLOduration=3.648411462 podStartE2EDuration="3.648411462s" podCreationTimestamp="2026-01-22 09:27:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:27:29.642417999 +0000 UTC m=+1019.486497062" watchObservedRunningTime="2026-01-22 09:27:29.648411462 +0000 UTC m=+1019.492490545" Jan 22 09:27:31 crc kubenswrapper[4892]: I0122 09:27:31.644672 4892 generic.go:334] "Generic (PLEG): container finished" podID="f2c286dd-c33b-453a-abdd-90baff8ff466" containerID="2008da1a8f75c2437ac912466a1ed4042821a1f7e816d3ca5960958d162ae924" exitCode=0 Jan 22 09:27:31 crc kubenswrapper[4892]: I0122 09:27:31.644749 4892 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/glance-db-sync-nhpjt" event={"ID":"f2c286dd-c33b-453a-abdd-90baff8ff466","Type":"ContainerDied","Data":"2008da1a8f75c2437ac912466a1ed4042821a1f7e816d3ca5960958d162ae924"} Jan 22 09:27:33 crc kubenswrapper[4892]: I0122 09:27:33.087176 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-nhpjt" Jan 22 09:27:33 crc kubenswrapper[4892]: I0122 09:27:33.253412 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2c286dd-c33b-453a-abdd-90baff8ff466-config-data\") pod \"f2c286dd-c33b-453a-abdd-90baff8ff466\" (UID: \"f2c286dd-c33b-453a-abdd-90baff8ff466\") " Jan 22 09:27:33 crc kubenswrapper[4892]: I0122 09:27:33.253654 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f2c286dd-c33b-453a-abdd-90baff8ff466-db-sync-config-data\") pod \"f2c286dd-c33b-453a-abdd-90baff8ff466\" (UID: \"f2c286dd-c33b-453a-abdd-90baff8ff466\") " Jan 22 09:27:33 crc kubenswrapper[4892]: I0122 09:27:33.253799 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2c286dd-c33b-453a-abdd-90baff8ff466-combined-ca-bundle\") pod \"f2c286dd-c33b-453a-abdd-90baff8ff466\" (UID: \"f2c286dd-c33b-453a-abdd-90baff8ff466\") " Jan 22 09:27:33 crc kubenswrapper[4892]: I0122 09:27:33.253890 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nk97\" (UniqueName: \"kubernetes.io/projected/f2c286dd-c33b-453a-abdd-90baff8ff466-kube-api-access-7nk97\") pod \"f2c286dd-c33b-453a-abdd-90baff8ff466\" (UID: \"f2c286dd-c33b-453a-abdd-90baff8ff466\") " Jan 22 09:27:33 crc kubenswrapper[4892]: I0122 09:27:33.259514 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2c286dd-c33b-453a-abdd-90baff8ff466-kube-api-access-7nk97" (OuterVolumeSpecName: "kube-api-access-7nk97") pod "f2c286dd-c33b-453a-abdd-90baff8ff466" (UID: "f2c286dd-c33b-453a-abdd-90baff8ff466"). InnerVolumeSpecName "kube-api-access-7nk97". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:27:33 crc kubenswrapper[4892]: I0122 09:27:33.259778 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2c286dd-c33b-453a-abdd-90baff8ff466-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f2c286dd-c33b-453a-abdd-90baff8ff466" (UID: "f2c286dd-c33b-453a-abdd-90baff8ff466"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:27:33 crc kubenswrapper[4892]: I0122 09:27:33.286139 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2c286dd-c33b-453a-abdd-90baff8ff466-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2c286dd-c33b-453a-abdd-90baff8ff466" (UID: "f2c286dd-c33b-453a-abdd-90baff8ff466"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:27:33 crc kubenswrapper[4892]: I0122 09:27:33.294042 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2c286dd-c33b-453a-abdd-90baff8ff466-config-data" (OuterVolumeSpecName: "config-data") pod "f2c286dd-c33b-453a-abdd-90baff8ff466" (UID: "f2c286dd-c33b-453a-abdd-90baff8ff466"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:27:33 crc kubenswrapper[4892]: I0122 09:27:33.356316 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2c286dd-c33b-453a-abdd-90baff8ff466-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:33 crc kubenswrapper[4892]: I0122 09:27:33.356355 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nk97\" (UniqueName: \"kubernetes.io/projected/f2c286dd-c33b-453a-abdd-90baff8ff466-kube-api-access-7nk97\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:33 crc kubenswrapper[4892]: I0122 09:27:33.356370 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2c286dd-c33b-453a-abdd-90baff8ff466-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:33 crc kubenswrapper[4892]: I0122 09:27:33.356382 4892 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f2c286dd-c33b-453a-abdd-90baff8ff466-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:33 crc kubenswrapper[4892]: I0122 09:27:33.661683 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nhpjt" event={"ID":"f2c286dd-c33b-453a-abdd-90baff8ff466","Type":"ContainerDied","Data":"4894d2fd86870d408af43a572cc379ce6ed906d1de3faa2382d9e90f7ae1b44e"} Jan 22 09:27:33 crc kubenswrapper[4892]: I0122 09:27:33.661725 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4894d2fd86870d408af43a572cc379ce6ed906d1de3faa2382d9e90f7ae1b44e" Jan 22 09:27:33 crc kubenswrapper[4892]: I0122 09:27:33.661786 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-nhpjt" Jan 22 09:27:33 crc kubenswrapper[4892]: I0122 09:27:33.989214 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-4t885"] Jan 22 09:27:33 crc kubenswrapper[4892]: I0122 09:27:33.989730 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8467b54bcc-4t885" podUID="3c0e04d1-d91d-4fc2-94ec-c46489638389" containerName="dnsmasq-dns" containerID="cri-o://8d6a1603fa048ebe81c7e90620e669965feae9a26f1ed44ed70ad28bd5eb6eca" gracePeriod=10 Jan 22 09:27:33 crc kubenswrapper[4892]: I0122 09:27:33.991502 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8467b54bcc-4t885" Jan 22 09:27:34 crc kubenswrapper[4892]: I0122 09:27:34.045637 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-mgwb6"] Jan 22 09:27:34 crc kubenswrapper[4892]: E0122 09:27:34.045984 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2c286dd-c33b-453a-abdd-90baff8ff466" containerName="glance-db-sync" Jan 22 09:27:34 crc kubenswrapper[4892]: I0122 09:27:34.046000 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2c286dd-c33b-453a-abdd-90baff8ff466" containerName="glance-db-sync" Jan 22 09:27:34 crc kubenswrapper[4892]: I0122 09:27:34.046176 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2c286dd-c33b-453a-abdd-90baff8ff466" containerName="glance-db-sync" Jan 22 09:27:34 crc kubenswrapper[4892]: I0122 09:27:34.047150 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-mgwb6" Jan 22 09:27:34 crc kubenswrapper[4892]: I0122 09:27:34.052221 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-mgwb6"] Jan 22 09:27:34 crc kubenswrapper[4892]: I0122 09:27:34.173653 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ca36c0e-2948-479f-88f8-f3ccf747bafd-ovsdbserver-sb\") pod \"dnsmasq-dns-56c9bc6f5c-mgwb6\" (UID: \"7ca36c0e-2948-479f-88f8-f3ccf747bafd\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-mgwb6" Jan 22 09:27:34 crc kubenswrapper[4892]: I0122 09:27:34.173709 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ca36c0e-2948-479f-88f8-f3ccf747bafd-dns-swift-storage-0\") pod \"dnsmasq-dns-56c9bc6f5c-mgwb6\" (UID: \"7ca36c0e-2948-479f-88f8-f3ccf747bafd\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-mgwb6" Jan 22 09:27:34 crc kubenswrapper[4892]: I0122 09:27:34.173781 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ca36c0e-2948-479f-88f8-f3ccf747bafd-dns-svc\") pod \"dnsmasq-dns-56c9bc6f5c-mgwb6\" (UID: \"7ca36c0e-2948-479f-88f8-f3ccf747bafd\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-mgwb6" Jan 22 09:27:34 crc kubenswrapper[4892]: I0122 09:27:34.173844 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ca36c0e-2948-479f-88f8-f3ccf747bafd-config\") pod \"dnsmasq-dns-56c9bc6f5c-mgwb6\" (UID: \"7ca36c0e-2948-479f-88f8-f3ccf747bafd\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-mgwb6" Jan 22 09:27:34 crc kubenswrapper[4892]: I0122 09:27:34.173871 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ca36c0e-2948-479f-88f8-f3ccf747bafd-ovsdbserver-nb\") pod \"dnsmasq-dns-56c9bc6f5c-mgwb6\" (UID: \"7ca36c0e-2948-479f-88f8-f3ccf747bafd\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-mgwb6" Jan 22 09:27:34 crc kubenswrapper[4892]: I0122 09:27:34.173897 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bstm8\" (UniqueName: \"kubernetes.io/projected/7ca36c0e-2948-479f-88f8-f3ccf747bafd-kube-api-access-bstm8\") pod \"dnsmasq-dns-56c9bc6f5c-mgwb6\" (UID: \"7ca36c0e-2948-479f-88f8-f3ccf747bafd\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-mgwb6" Jan 22 09:27:34 crc kubenswrapper[4892]: I0122 09:27:34.274969 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ca36c0e-2948-479f-88f8-f3ccf747bafd-ovsdbserver-sb\") pod \"dnsmasq-dns-56c9bc6f5c-mgwb6\" (UID: \"7ca36c0e-2948-479f-88f8-f3ccf747bafd\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-mgwb6" Jan 22 09:27:34 crc kubenswrapper[4892]: I0122 09:27:34.275018 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ca36c0e-2948-479f-88f8-f3ccf747bafd-dns-swift-storage-0\") pod \"dnsmasq-dns-56c9bc6f5c-mgwb6\" (UID: \"7ca36c0e-2948-479f-88f8-f3ccf747bafd\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-mgwb6" Jan 22 09:27:34 crc kubenswrapper[4892]: I0122 09:27:34.275054 4892 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ca36c0e-2948-479f-88f8-f3ccf747bafd-dns-svc\") pod \"dnsmasq-dns-56c9bc6f5c-mgwb6\" (UID: \"7ca36c0e-2948-479f-88f8-f3ccf747bafd\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-mgwb6" Jan 22 09:27:34 crc kubenswrapper[4892]: I0122 09:27:34.275093 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ca36c0e-2948-479f-88f8-f3ccf747bafd-config\") pod \"dnsmasq-dns-56c9bc6f5c-mgwb6\" (UID: \"7ca36c0e-2948-479f-88f8-f3ccf747bafd\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-mgwb6" Jan 22 09:27:34 crc kubenswrapper[4892]: I0122 09:27:34.275124 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ca36c0e-2948-479f-88f8-f3ccf747bafd-ovsdbserver-nb\") pod \"dnsmasq-dns-56c9bc6f5c-mgwb6\" (UID: \"7ca36c0e-2948-479f-88f8-f3ccf747bafd\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-mgwb6" Jan 22 09:27:34 crc kubenswrapper[4892]: I0122 09:27:34.275149 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bstm8\" (UniqueName: \"kubernetes.io/projected/7ca36c0e-2948-479f-88f8-f3ccf747bafd-kube-api-access-bstm8\") pod \"dnsmasq-dns-56c9bc6f5c-mgwb6\" (UID: \"7ca36c0e-2948-479f-88f8-f3ccf747bafd\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-mgwb6" Jan 22 09:27:34 crc kubenswrapper[4892]: I0122 09:27:34.276035 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ca36c0e-2948-479f-88f8-f3ccf747bafd-ovsdbserver-sb\") pod \"dnsmasq-dns-56c9bc6f5c-mgwb6\" (UID: \"7ca36c0e-2948-479f-88f8-f3ccf747bafd\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-mgwb6" Jan 22 09:27:34 crc kubenswrapper[4892]: I0122 09:27:34.276131 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ca36c0e-2948-479f-88f8-f3ccf747bafd-dns-swift-storage-0\") pod \"dnsmasq-dns-56c9bc6f5c-mgwb6\" (UID: \"7ca36c0e-2948-479f-88f8-f3ccf747bafd\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-mgwb6" Jan 22 09:27:34 crc kubenswrapper[4892]: I0122 09:27:34.276706 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ca36c0e-2948-479f-88f8-f3ccf747bafd-ovsdbserver-nb\") pod \"dnsmasq-dns-56c9bc6f5c-mgwb6\" (UID: \"7ca36c0e-2948-479f-88f8-f3ccf747bafd\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-mgwb6" Jan 22 09:27:34 crc kubenswrapper[4892]: I0122 09:27:34.276855 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ca36c0e-2948-479f-88f8-f3ccf747bafd-config\") pod \"dnsmasq-dns-56c9bc6f5c-mgwb6\" (UID: \"7ca36c0e-2948-479f-88f8-f3ccf747bafd\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-mgwb6" Jan 22 09:27:34 crc kubenswrapper[4892]: I0122 09:27:34.277177 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ca36c0e-2948-479f-88f8-f3ccf747bafd-dns-svc\") pod \"dnsmasq-dns-56c9bc6f5c-mgwb6\" (UID: \"7ca36c0e-2948-479f-88f8-f3ccf747bafd\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-mgwb6" Jan 22 09:27:34 crc kubenswrapper[4892]: I0122 09:27:34.302416 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bstm8\" (UniqueName: 
\"kubernetes.io/projected/7ca36c0e-2948-479f-88f8-f3ccf747bafd-kube-api-access-bstm8\") pod \"dnsmasq-dns-56c9bc6f5c-mgwb6\" (UID: \"7ca36c0e-2948-479f-88f8-f3ccf747bafd\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-mgwb6" Jan 22 09:27:34 crc kubenswrapper[4892]: I0122 09:27:34.364090 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-mgwb6" Jan 22 09:27:34 crc kubenswrapper[4892]: I0122 09:27:34.673274 4892 generic.go:334] "Generic (PLEG): container finished" podID="3c0e04d1-d91d-4fc2-94ec-c46489638389" containerID="8d6a1603fa048ebe81c7e90620e669965feae9a26f1ed44ed70ad28bd5eb6eca" exitCode=0 Jan 22 09:27:34 crc kubenswrapper[4892]: I0122 09:27:34.673316 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-4t885" event={"ID":"3c0e04d1-d91d-4fc2-94ec-c46489638389","Type":"ContainerDied","Data":"8d6a1603fa048ebe81c7e90620e669965feae9a26f1ed44ed70ad28bd5eb6eca"} Jan 22 09:27:34 crc kubenswrapper[4892]: I0122 09:27:34.840810 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-mgwb6"] Jan 22 09:27:35 crc kubenswrapper[4892]: I0122 09:27:35.681704 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-mgwb6" event={"ID":"7ca36c0e-2948-479f-88f8-f3ccf747bafd","Type":"ContainerStarted","Data":"7880984785d525f9701b531312c844e14a445c47ad4dda1800d07feba3bcabd2"} Jan 22 09:27:37 crc kubenswrapper[4892]: I0122 09:27:37.232253 4892 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8467b54bcc-4t885" podUID="3c0e04d1-d91d-4fc2-94ec-c46489638389" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.126:5353: connect: connection refused" Jan 22 09:27:40 crc kubenswrapper[4892]: I0122 09:27:40.629820 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8467b54bcc-4t885" Jan 22 09:27:40 crc kubenswrapper[4892]: I0122 09:27:40.721808 4892 generic.go:334] "Generic (PLEG): container finished" podID="7ca36c0e-2948-479f-88f8-f3ccf747bafd" containerID="4ab094d9cbb4d372e5035e41bf67a4c249e124956ec1719b6bf030f44d2460b8" exitCode=0 Jan 22 09:27:40 crc kubenswrapper[4892]: I0122 09:27:40.721872 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-mgwb6" event={"ID":"7ca36c0e-2948-479f-88f8-f3ccf747bafd","Type":"ContainerDied","Data":"4ab094d9cbb4d372e5035e41bf67a4c249e124956ec1719b6bf030f44d2460b8"} Jan 22 09:27:40 crc kubenswrapper[4892]: I0122 09:27:40.724516 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-4t885" event={"ID":"3c0e04d1-d91d-4fc2-94ec-c46489638389","Type":"ContainerDied","Data":"85a9164450019afedf522b6d2b389b3a4b489b82cfb7b9337f345be9ba5156a6"} Jan 22 09:27:40 crc kubenswrapper[4892]: I0122 09:27:40.724556 4892 scope.go:117] "RemoveContainer" containerID="8d6a1603fa048ebe81c7e90620e669965feae9a26f1ed44ed70ad28bd5eb6eca" Jan 22 09:27:40 crc kubenswrapper[4892]: I0122 09:27:40.724684 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8467b54bcc-4t885" Jan 22 09:27:40 crc kubenswrapper[4892]: I0122 09:27:40.800161 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c0e04d1-d91d-4fc2-94ec-c46489638389-config\") pod \"3c0e04d1-d91d-4fc2-94ec-c46489638389\" (UID: \"3c0e04d1-d91d-4fc2-94ec-c46489638389\") " Jan 22 09:27:40 crc kubenswrapper[4892]: I0122 09:27:40.800408 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c0e04d1-d91d-4fc2-94ec-c46489638389-ovsdbserver-nb\") pod \"3c0e04d1-d91d-4fc2-94ec-c46489638389\" (UID: \"3c0e04d1-d91d-4fc2-94ec-c46489638389\") " Jan 22 09:27:40 crc kubenswrapper[4892]: I0122 09:27:40.800446 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c0e04d1-d91d-4fc2-94ec-c46489638389-dns-svc\") pod \"3c0e04d1-d91d-4fc2-94ec-c46489638389\" (UID: \"3c0e04d1-d91d-4fc2-94ec-c46489638389\") " Jan 22 09:27:40 crc kubenswrapper[4892]: I0122 09:27:40.800551 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwhr5\" (UniqueName: \"kubernetes.io/projected/3c0e04d1-d91d-4fc2-94ec-c46489638389-kube-api-access-hwhr5\") pod \"3c0e04d1-d91d-4fc2-94ec-c46489638389\" (UID: \"3c0e04d1-d91d-4fc2-94ec-c46489638389\") " Jan 22 09:27:40 crc kubenswrapper[4892]: I0122 09:27:40.800590 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c0e04d1-d91d-4fc2-94ec-c46489638389-dns-swift-storage-0\") pod \"3c0e04d1-d91d-4fc2-94ec-c46489638389\" (UID: \"3c0e04d1-d91d-4fc2-94ec-c46489638389\") " Jan 22 09:27:40 crc kubenswrapper[4892]: I0122 09:27:40.800666 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c0e04d1-d91d-4fc2-94ec-c46489638389-ovsdbserver-sb\") pod \"3c0e04d1-d91d-4fc2-94ec-c46489638389\" (UID: \"3c0e04d1-d91d-4fc2-94ec-c46489638389\") " Jan 22 09:27:40 crc kubenswrapper[4892]: I0122 09:27:40.805193 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c0e04d1-d91d-4fc2-94ec-c46489638389-kube-api-access-hwhr5" (OuterVolumeSpecName: "kube-api-access-hwhr5") pod "3c0e04d1-d91d-4fc2-94ec-c46489638389" (UID: "3c0e04d1-d91d-4fc2-94ec-c46489638389"). InnerVolumeSpecName "kube-api-access-hwhr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:27:40 crc kubenswrapper[4892]: I0122 09:27:40.836006 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c0e04d1-d91d-4fc2-94ec-c46489638389-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3c0e04d1-d91d-4fc2-94ec-c46489638389" (UID: "3c0e04d1-d91d-4fc2-94ec-c46489638389"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:40 crc kubenswrapper[4892]: I0122 09:27:40.836112 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c0e04d1-d91d-4fc2-94ec-c46489638389-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3c0e04d1-d91d-4fc2-94ec-c46489638389" (UID: "3c0e04d1-d91d-4fc2-94ec-c46489638389"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:40 crc kubenswrapper[4892]: I0122 09:27:40.842765 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c0e04d1-d91d-4fc2-94ec-c46489638389-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3c0e04d1-d91d-4fc2-94ec-c46489638389" (UID: "3c0e04d1-d91d-4fc2-94ec-c46489638389"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:40 crc kubenswrapper[4892]: I0122 09:27:40.843123 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c0e04d1-d91d-4fc2-94ec-c46489638389-config" (OuterVolumeSpecName: "config") pod "3c0e04d1-d91d-4fc2-94ec-c46489638389" (UID: "3c0e04d1-d91d-4fc2-94ec-c46489638389"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:40 crc kubenswrapper[4892]: I0122 09:27:40.849856 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c0e04d1-d91d-4fc2-94ec-c46489638389-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3c0e04d1-d91d-4fc2-94ec-c46489638389" (UID: "3c0e04d1-d91d-4fc2-94ec-c46489638389"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:40 crc kubenswrapper[4892]: I0122 09:27:40.869139 4892 scope.go:117] "RemoveContainer" containerID="19b2fc53b36e767062f5642dc040438b099f2c4499939b28053a6a3f31123a66" Jan 22 09:27:40 crc kubenswrapper[4892]: I0122 09:27:40.902586 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwhr5\" (UniqueName: \"kubernetes.io/projected/3c0e04d1-d91d-4fc2-94ec-c46489638389-kube-api-access-hwhr5\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:40 crc kubenswrapper[4892]: I0122 09:27:40.902626 4892 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c0e04d1-d91d-4fc2-94ec-c46489638389-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:40 crc kubenswrapper[4892]: I0122 09:27:40.902637 4892 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c0e04d1-d91d-4fc2-94ec-c46489638389-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:40 crc kubenswrapper[4892]: I0122 09:27:40.902647 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c0e04d1-d91d-4fc2-94ec-c46489638389-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:40 crc kubenswrapper[4892]: I0122 09:27:40.902657 4892 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c0e04d1-d91d-4fc2-94ec-c46489638389-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:40 crc kubenswrapper[4892]: I0122 09:27:40.902668 4892 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c0e04d1-d91d-4fc2-94ec-c46489638389-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:41 crc kubenswrapper[4892]: I0122 09:27:41.056770 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-4t885"] Jan 22 09:27:41 crc kubenswrapper[4892]: I0122 09:27:41.062307 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-4t885"] Jan 22 09:27:41 crc kubenswrapper[4892]: I0122 09:27:41.427857 4892 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="3c0e04d1-d91d-4fc2-94ec-c46489638389" path="/var/lib/kubelet/pods/3c0e04d1-d91d-4fc2-94ec-c46489638389/volumes" Jan 22 09:27:41 crc kubenswrapper[4892]: I0122 09:27:41.637485 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 22 09:27:41 crc kubenswrapper[4892]: I0122 09:27:41.649543 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:27:41 crc kubenswrapper[4892]: I0122 09:27:41.747392 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-mgwb6" event={"ID":"7ca36c0e-2948-479f-88f8-f3ccf747bafd","Type":"ContainerStarted","Data":"b167de06fa93b0c04f3a1410f762b566ffbef6a7e9bf3753205e8d4ef27e9fa2"} Jan 22 09:27:41 crc kubenswrapper[4892]: I0122 09:27:41.748122 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56c9bc6f5c-mgwb6" Jan 22 09:27:41 crc kubenswrapper[4892]: I0122 09:27:41.784393 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56c9bc6f5c-mgwb6" podStartSLOduration=7.784373168 podStartE2EDuration="7.784373168s" podCreationTimestamp="2026-01-22 09:27:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:27:41.778608489 +0000 UTC m=+1031.622687562" watchObservedRunningTime="2026-01-22 09:27:41.784373168 +0000 UTC m=+1031.628452231" Jan 22 09:27:41 crc kubenswrapper[4892]: I0122 09:27:41.945065 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-pv7r2"] Jan 22 09:27:41 crc kubenswrapper[4892]: E0122 09:27:41.945457 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c0e04d1-d91d-4fc2-94ec-c46489638389" containerName="dnsmasq-dns" Jan 22 09:27:41 crc kubenswrapper[4892]: I0122 09:27:41.945475 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c0e04d1-d91d-4fc2-94ec-c46489638389" containerName="dnsmasq-dns" Jan 22 09:27:41 crc kubenswrapper[4892]: E0122 09:27:41.945496 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c0e04d1-d91d-4fc2-94ec-c46489638389" containerName="init" Jan 22 09:27:41 crc kubenswrapper[4892]: I0122 09:27:41.945505 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c0e04d1-d91d-4fc2-94ec-c46489638389" containerName="init" Jan 22 09:27:41 crc kubenswrapper[4892]: I0122 09:27:41.945707 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c0e04d1-d91d-4fc2-94ec-c46489638389" containerName="dnsmasq-dns" Jan 22 09:27:41 crc kubenswrapper[4892]: I0122 09:27:41.946339 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-pv7r2" Jan 22 09:27:41 crc kubenswrapper[4892]: I0122 09:27:41.969623 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-pv7r2"] Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.025419 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fe42496-0b56-4cb9-a100-7098f1ecd0ae-operator-scripts\") pod \"cinder-db-create-pv7r2\" (UID: \"6fe42496-0b56-4cb9-a100-7098f1ecd0ae\") " pod="openstack/cinder-db-create-pv7r2" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.025475 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdlk2\" (UniqueName: \"kubernetes.io/projected/6fe42496-0b56-4cb9-a100-7098f1ecd0ae-kube-api-access-qdlk2\") pod \"cinder-db-create-pv7r2\" (UID: \"6fe42496-0b56-4cb9-a100-7098f1ecd0ae\") " pod="openstack/cinder-db-create-pv7r2" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.045940 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-jnldh"] Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.046962 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jnldh" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.056337 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-8e4a-account-create-update-xv7gx"] Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.057325 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8e4a-account-create-update-xv7gx" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.059805 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.064348 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-jnldh"] Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.079688 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8e4a-account-create-update-xv7gx"] Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.126689 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fe42496-0b56-4cb9-a100-7098f1ecd0ae-operator-scripts\") pod \"cinder-db-create-pv7r2\" (UID: \"6fe42496-0b56-4cb9-a100-7098f1ecd0ae\") " pod="openstack/cinder-db-create-pv7r2" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.126752 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdlk2\" (UniqueName: \"kubernetes.io/projected/6fe42496-0b56-4cb9-a100-7098f1ecd0ae-kube-api-access-qdlk2\") pod \"cinder-db-create-pv7r2\" (UID: \"6fe42496-0b56-4cb9-a100-7098f1ecd0ae\") " pod="openstack/cinder-db-create-pv7r2" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.126789 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4thn\" (UniqueName: \"kubernetes.io/projected/589bb094-75ad-4bc7-bf98-f5efaade599d-kube-api-access-b4thn\") pod \"barbican-db-create-jnldh\" (UID: \"589bb094-75ad-4bc7-bf98-f5efaade599d\") " pod="openstack/barbican-db-create-jnldh" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.126881 4892 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/589bb094-75ad-4bc7-bf98-f5efaade599d-operator-scripts\") pod \"barbican-db-create-jnldh\" (UID: \"589bb094-75ad-4bc7-bf98-f5efaade599d\") " pod="openstack/barbican-db-create-jnldh" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.127674 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fe42496-0b56-4cb9-a100-7098f1ecd0ae-operator-scripts\") pod \"cinder-db-create-pv7r2\" (UID: \"6fe42496-0b56-4cb9-a100-7098f1ecd0ae\") " pod="openstack/cinder-db-create-pv7r2" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.139418 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-f0f0-account-create-update-v8np7"] Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.140360 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f0f0-account-create-update-v8np7" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.145040 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.161948 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdlk2\" (UniqueName: \"kubernetes.io/projected/6fe42496-0b56-4cb9-a100-7098f1ecd0ae-kube-api-access-qdlk2\") pod \"cinder-db-create-pv7r2\" (UID: \"6fe42496-0b56-4cb9-a100-7098f1ecd0ae\") " pod="openstack/cinder-db-create-pv7r2" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.170081 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-f0f0-account-create-update-v8np7"] Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.228556 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bc4eac9-d3bb-4c29-90e4-bb35bec79c96-operator-scripts\") pod \"barbican-8e4a-account-create-update-xv7gx\" (UID: \"6bc4eac9-d3bb-4c29-90e4-bb35bec79c96\") " pod="openstack/barbican-8e4a-account-create-update-xv7gx" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.228614 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cghm4\" (UniqueName: \"kubernetes.io/projected/3c59bcbd-6655-491c-82b4-9ca9ed61ff8c-kube-api-access-cghm4\") pod \"cinder-f0f0-account-create-update-v8np7\" (UID: \"3c59bcbd-6655-491c-82b4-9ca9ed61ff8c\") " pod="openstack/cinder-f0f0-account-create-update-v8np7" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.228812 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4thn\" (UniqueName: \"kubernetes.io/projected/589bb094-75ad-4bc7-bf98-f5efaade599d-kube-api-access-b4thn\") pod \"barbican-db-create-jnldh\" (UID: \"589bb094-75ad-4bc7-bf98-f5efaade599d\") " pod="openstack/barbican-db-create-jnldh" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.228871 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr55v\" (UniqueName: \"kubernetes.io/projected/6bc4eac9-d3bb-4c29-90e4-bb35bec79c96-kube-api-access-tr55v\") pod \"barbican-8e4a-account-create-update-xv7gx\" (UID: \"6bc4eac9-d3bb-4c29-90e4-bb35bec79c96\") " pod="openstack/barbican-8e4a-account-create-update-xv7gx" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 
09:27:42.228943 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c59bcbd-6655-491c-82b4-9ca9ed61ff8c-operator-scripts\") pod \"cinder-f0f0-account-create-update-v8np7\" (UID: \"3c59bcbd-6655-491c-82b4-9ca9ed61ff8c\") " pod="openstack/cinder-f0f0-account-create-update-v8np7" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.229089 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/589bb094-75ad-4bc7-bf98-f5efaade599d-operator-scripts\") pod \"barbican-db-create-jnldh\" (UID: \"589bb094-75ad-4bc7-bf98-f5efaade599d\") " pod="openstack/barbican-db-create-jnldh" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.229788 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/589bb094-75ad-4bc7-bf98-f5efaade599d-operator-scripts\") pod \"barbican-db-create-jnldh\" (UID: \"589bb094-75ad-4bc7-bf98-f5efaade599d\") " pod="openstack/barbican-db-create-jnldh" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.244774 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4thn\" (UniqueName: \"kubernetes.io/projected/589bb094-75ad-4bc7-bf98-f5efaade599d-kube-api-access-b4thn\") pod \"barbican-db-create-jnldh\" (UID: \"589bb094-75ad-4bc7-bf98-f5efaade599d\") " pod="openstack/barbican-db-create-jnldh" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.299687 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-8prsr"] Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.300967 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8prsr" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.302960 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.306747 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.306867 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sdhhd" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.309524 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.310887 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-8prsr"] Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.321955 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-pv7r2" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.344114 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bc4eac9-d3bb-4c29-90e4-bb35bec79c96-operator-scripts\") pod \"barbican-8e4a-account-create-update-xv7gx\" (UID: \"6bc4eac9-d3bb-4c29-90e4-bb35bec79c96\") " pod="openstack/barbican-8e4a-account-create-update-xv7gx" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.344216 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cghm4\" (UniqueName: \"kubernetes.io/projected/3c59bcbd-6655-491c-82b4-9ca9ed61ff8c-kube-api-access-cghm4\") pod \"cinder-f0f0-account-create-update-v8np7\" (UID: \"3c59bcbd-6655-491c-82b4-9ca9ed61ff8c\") " pod="openstack/cinder-f0f0-account-create-update-v8np7" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.344371 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr55v\" (UniqueName: \"kubernetes.io/projected/6bc4eac9-d3bb-4c29-90e4-bb35bec79c96-kube-api-access-tr55v\") pod \"barbican-8e4a-account-create-update-xv7gx\" (UID: \"6bc4eac9-d3bb-4c29-90e4-bb35bec79c96\") " pod="openstack/barbican-8e4a-account-create-update-xv7gx" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.344445 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c59bcbd-6655-491c-82b4-9ca9ed61ff8c-operator-scripts\") pod \"cinder-f0f0-account-create-update-v8np7\" (UID: \"3c59bcbd-6655-491c-82b4-9ca9ed61ff8c\") " pod="openstack/cinder-f0f0-account-create-update-v8np7" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.345155 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c59bcbd-6655-491c-82b4-9ca9ed61ff8c-operator-scripts\") pod \"cinder-f0f0-account-create-update-v8np7\" (UID: \"3c59bcbd-6655-491c-82b4-9ca9ed61ff8c\") " pod="openstack/cinder-f0f0-account-create-update-v8np7" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.345704 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bc4eac9-d3bb-4c29-90e4-bb35bec79c96-operator-scripts\") pod \"barbican-8e4a-account-create-update-xv7gx\" (UID: \"6bc4eac9-d3bb-4c29-90e4-bb35bec79c96\") " pod="openstack/barbican-8e4a-account-create-update-xv7gx" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.367499 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr55v\" (UniqueName: \"kubernetes.io/projected/6bc4eac9-d3bb-4c29-90e4-bb35bec79c96-kube-api-access-tr55v\") pod \"barbican-8e4a-account-create-update-xv7gx\" (UID: \"6bc4eac9-d3bb-4c29-90e4-bb35bec79c96\") " pod="openstack/barbican-8e4a-account-create-update-xv7gx" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.370861 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cghm4\" (UniqueName: \"kubernetes.io/projected/3c59bcbd-6655-491c-82b4-9ca9ed61ff8c-kube-api-access-cghm4\") pod \"cinder-f0f0-account-create-update-v8np7\" (UID: \"3c59bcbd-6655-491c-82b4-9ca9ed61ff8c\") " pod="openstack/cinder-f0f0-account-create-update-v8np7" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.370950 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-jnldh" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.375955 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8e4a-account-create-update-xv7gx" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.435353 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-ca79-account-create-update-rhltg"] Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.436689 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ca79-account-create-update-rhltg" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.442600 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.446478 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st24d\" (UniqueName: \"kubernetes.io/projected/a418ac05-1cad-45b4-a9b6-74b4db83248f-kube-api-access-st24d\") pod \"keystone-db-sync-8prsr\" (UID: \"a418ac05-1cad-45b4-a9b6-74b4db83248f\") " pod="openstack/keystone-db-sync-8prsr" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.446542 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a418ac05-1cad-45b4-a9b6-74b4db83248f-config-data\") pod \"keystone-db-sync-8prsr\" (UID: \"a418ac05-1cad-45b4-a9b6-74b4db83248f\") " pod="openstack/keystone-db-sync-8prsr" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.446586 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a418ac05-1cad-45b4-a9b6-74b4db83248f-combined-ca-bundle\") pod \"keystone-db-sync-8prsr\" (UID: \"a418ac05-1cad-45b4-a9b6-74b4db83248f\") " pod="openstack/keystone-db-sync-8prsr" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.446765 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-x5mtp"] Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.447977 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-x5mtp" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.456136 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-ca79-account-create-update-rhltg"] Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.456438 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-f0f0-account-create-update-v8np7" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.462696 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-x5mtp"] Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.548252 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a418ac05-1cad-45b4-a9b6-74b4db83248f-config-data\") pod \"keystone-db-sync-8prsr\" (UID: \"a418ac05-1cad-45b4-a9b6-74b4db83248f\") " pod="openstack/keystone-db-sync-8prsr" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.548573 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a418ac05-1cad-45b4-a9b6-74b4db83248f-combined-ca-bundle\") pod \"keystone-db-sync-8prsr\" (UID: \"a418ac05-1cad-45b4-a9b6-74b4db83248f\") " pod="openstack/keystone-db-sync-8prsr" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.548661 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec122921-e058-41f5-932e-836e78d5c91e-operator-scripts\") pod \"neutron-ca79-account-create-update-rhltg\" (UID: \"ec122921-e058-41f5-932e-836e78d5c91e\") " pod="openstack/neutron-ca79-account-create-update-rhltg" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.548692 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgjmr\" (UniqueName: \"kubernetes.io/projected/ec122921-e058-41f5-932e-836e78d5c91e-kube-api-access-fgjmr\") pod \"neutron-ca79-account-create-update-rhltg\" (UID: \"ec122921-e058-41f5-932e-836e78d5c91e\") " pod="openstack/neutron-ca79-account-create-update-rhltg" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.548716 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a1df4bd-7cb1-40b0-88f7-578961c621cb-operator-scripts\") pod \"neutron-db-create-x5mtp\" (UID: \"3a1df4bd-7cb1-40b0-88f7-578961c621cb\") " pod="openstack/neutron-db-create-x5mtp" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.548733 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkcrr\" (UniqueName: \"kubernetes.io/projected/3a1df4bd-7cb1-40b0-88f7-578961c621cb-kube-api-access-dkcrr\") pod \"neutron-db-create-x5mtp\" (UID: \"3a1df4bd-7cb1-40b0-88f7-578961c621cb\") " pod="openstack/neutron-db-create-x5mtp" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.548770 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st24d\" (UniqueName: \"kubernetes.io/projected/a418ac05-1cad-45b4-a9b6-74b4db83248f-kube-api-access-st24d\") pod \"keystone-db-sync-8prsr\" (UID: \"a418ac05-1cad-45b4-a9b6-74b4db83248f\") " pod="openstack/keystone-db-sync-8prsr" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.552792 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a418ac05-1cad-45b4-a9b6-74b4db83248f-config-data\") pod \"keystone-db-sync-8prsr\" (UID: \"a418ac05-1cad-45b4-a9b6-74b4db83248f\") " pod="openstack/keystone-db-sync-8prsr" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.557491 4892 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a418ac05-1cad-45b4-a9b6-74b4db83248f-combined-ca-bundle\") pod \"keystone-db-sync-8prsr\" (UID: \"a418ac05-1cad-45b4-a9b6-74b4db83248f\") " pod="openstack/keystone-db-sync-8prsr" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.577650 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st24d\" (UniqueName: \"kubernetes.io/projected/a418ac05-1cad-45b4-a9b6-74b4db83248f-kube-api-access-st24d\") pod \"keystone-db-sync-8prsr\" (UID: \"a418ac05-1cad-45b4-a9b6-74b4db83248f\") " pod="openstack/keystone-db-sync-8prsr" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.616192 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8prsr" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.651237 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec122921-e058-41f5-932e-836e78d5c91e-operator-scripts\") pod \"neutron-ca79-account-create-update-rhltg\" (UID: \"ec122921-e058-41f5-932e-836e78d5c91e\") " pod="openstack/neutron-ca79-account-create-update-rhltg" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.651308 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgjmr\" (UniqueName: \"kubernetes.io/projected/ec122921-e058-41f5-932e-836e78d5c91e-kube-api-access-fgjmr\") pod \"neutron-ca79-account-create-update-rhltg\" (UID: \"ec122921-e058-41f5-932e-836e78d5c91e\") " pod="openstack/neutron-ca79-account-create-update-rhltg" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.651337 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a1df4bd-7cb1-40b0-88f7-578961c621cb-operator-scripts\") pod \"neutron-db-create-x5mtp\" (UID: \"3a1df4bd-7cb1-40b0-88f7-578961c621cb\") " pod="openstack/neutron-db-create-x5mtp" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.651355 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkcrr\" (UniqueName: \"kubernetes.io/projected/3a1df4bd-7cb1-40b0-88f7-578961c621cb-kube-api-access-dkcrr\") pod \"neutron-db-create-x5mtp\" (UID: \"3a1df4bd-7cb1-40b0-88f7-578961c621cb\") " pod="openstack/neutron-db-create-x5mtp" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.652371 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a1df4bd-7cb1-40b0-88f7-578961c621cb-operator-scripts\") pod \"neutron-db-create-x5mtp\" (UID: \"3a1df4bd-7cb1-40b0-88f7-578961c621cb\") " pod="openstack/neutron-db-create-x5mtp" Jan 22 09:27:42 crc kubenswrapper[4892]: I0122 09:27:42.653515 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec122921-e058-41f5-932e-836e78d5c91e-operator-scripts\") pod \"neutron-ca79-account-create-update-rhltg\" (UID: \"ec122921-e058-41f5-932e-836e78d5c91e\") " pod="openstack/neutron-ca79-account-create-update-rhltg" Jan 22 09:27:43 crc kubenswrapper[4892]: I0122 09:27:42.673349 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgjmr\" (UniqueName: \"kubernetes.io/projected/ec122921-e058-41f5-932e-836e78d5c91e-kube-api-access-fgjmr\") pod \"neutron-ca79-account-create-update-rhltg\" (UID: 
\"ec122921-e058-41f5-932e-836e78d5c91e\") " pod="openstack/neutron-ca79-account-create-update-rhltg" Jan 22 09:27:43 crc kubenswrapper[4892]: I0122 09:27:42.674691 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkcrr\" (UniqueName: \"kubernetes.io/projected/3a1df4bd-7cb1-40b0-88f7-578961c621cb-kube-api-access-dkcrr\") pod \"neutron-db-create-x5mtp\" (UID: \"3a1df4bd-7cb1-40b0-88f7-578961c621cb\") " pod="openstack/neutron-db-create-x5mtp" Jan 22 09:27:43 crc kubenswrapper[4892]: I0122 09:27:42.771902 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ca79-account-create-update-rhltg" Jan 22 09:27:43 crc kubenswrapper[4892]: I0122 09:27:42.790685 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-x5mtp" Jan 22 09:27:43 crc kubenswrapper[4892]: I0122 09:27:43.451242 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-jnldh"] Jan 22 09:27:43 crc kubenswrapper[4892]: I0122 09:27:43.460246 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-pv7r2"] Jan 22 09:27:43 crc kubenswrapper[4892]: I0122 09:27:43.469041 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8e4a-account-create-update-xv7gx"] Jan 22 09:27:43 crc kubenswrapper[4892]: I0122 09:27:43.592130 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-ca79-account-create-update-rhltg"] Jan 22 09:27:43 crc kubenswrapper[4892]: I0122 09:27:43.618685 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-f0f0-account-create-update-v8np7"] Jan 22 09:27:43 crc kubenswrapper[4892]: W0122 09:27:43.621379 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec122921_e058_41f5_932e_836e78d5c91e.slice/crio-8c84225343cf9942a92c997afc6f88515b8d4be253135e43915322ff56588ca5 WatchSource:0}: Error finding container 8c84225343cf9942a92c997afc6f88515b8d4be253135e43915322ff56588ca5: Status 404 returned error can't find the container with id 8c84225343cf9942a92c997afc6f88515b8d4be253135e43915322ff56588ca5 Jan 22 09:27:43 crc kubenswrapper[4892]: I0122 09:27:43.631032 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-8prsr"] Jan 22 09:27:43 crc kubenswrapper[4892]: W0122 09:27:43.635505 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c59bcbd_6655_491c_82b4_9ca9ed61ff8c.slice/crio-e3eb7b21e1bdc0927384c503ab5e4130706c322d2718e0c4da8296202003d69c WatchSource:0}: Error finding container e3eb7b21e1bdc0927384c503ab5e4130706c322d2718e0c4da8296202003d69c: Status 404 returned error can't find the container with id e3eb7b21e1bdc0927384c503ab5e4130706c322d2718e0c4da8296202003d69c Jan 22 09:27:43 crc kubenswrapper[4892]: I0122 09:27:43.637005 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-x5mtp"] Jan 22 09:27:43 crc kubenswrapper[4892]: I0122 09:27:43.772511 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-x5mtp" event={"ID":"3a1df4bd-7cb1-40b0-88f7-578961c621cb","Type":"ContainerStarted","Data":"6c87b198a3399ad26500b78441703463be999e4fa320e08748032c8c40d51731"} Jan 22 09:27:43 crc kubenswrapper[4892]: I0122 09:27:43.773691 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-8e4a-account-create-update-xv7gx" event={"ID":"6bc4eac9-d3bb-4c29-90e4-bb35bec79c96","Type":"ContainerStarted","Data":"1e729ae4ff9ffae6863199549b292b55ecf7a38f71950a11e0843a39d29ea8c4"} Jan 22 09:27:43 crc kubenswrapper[4892]: I0122 09:27:43.775678 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ca79-account-create-update-rhltg" event={"ID":"ec122921-e058-41f5-932e-836e78d5c91e","Type":"ContainerStarted","Data":"8c84225343cf9942a92c997afc6f88515b8d4be253135e43915322ff56588ca5"} Jan 22 09:27:43 crc kubenswrapper[4892]: I0122 09:27:43.777169 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jnldh" event={"ID":"589bb094-75ad-4bc7-bf98-f5efaade599d","Type":"ContainerStarted","Data":"86a3d2458d24e75edc76e370675c4470cd0ff33faecaca2af1d083d3f463bd4d"} Jan 22 09:27:43 crc kubenswrapper[4892]: I0122 09:27:43.778313 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8prsr" event={"ID":"a418ac05-1cad-45b4-a9b6-74b4db83248f","Type":"ContainerStarted","Data":"a694e80d143a1ef0d2117f47afbadd697035118ed03b41b864aa6b26522dcee0"} Jan 22 09:27:43 crc kubenswrapper[4892]: I0122 09:27:43.779682 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pv7r2" event={"ID":"6fe42496-0b56-4cb9-a100-7098f1ecd0ae","Type":"ContainerStarted","Data":"18e60d76837f86e4d4f0a4334f072cde76f7f674763864c6f4660dcc9ea1e7bf"} Jan 22 09:27:43 crc kubenswrapper[4892]: I0122 09:27:43.781390 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f0f0-account-create-update-v8np7" event={"ID":"3c59bcbd-6655-491c-82b4-9ca9ed61ff8c","Type":"ContainerStarted","Data":"e3eb7b21e1bdc0927384c503ab5e4130706c322d2718e0c4da8296202003d69c"} Jan 22 09:27:44 crc kubenswrapper[4892]: I0122 09:27:44.789981 4892 generic.go:334] "Generic (PLEG): container finished" podID="6fe42496-0b56-4cb9-a100-7098f1ecd0ae" containerID="e3c2e779e5dcd1cee2c997526478080cdede4a0735aa75097bd9e085e344773b" exitCode=0 Jan 22 09:27:44 crc kubenswrapper[4892]: I0122 09:27:44.790050 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pv7r2" event={"ID":"6fe42496-0b56-4cb9-a100-7098f1ecd0ae","Type":"ContainerDied","Data":"e3c2e779e5dcd1cee2c997526478080cdede4a0735aa75097bd9e085e344773b"} Jan 22 09:27:44 crc kubenswrapper[4892]: I0122 09:27:44.791914 4892 generic.go:334] "Generic (PLEG): container finished" podID="3c59bcbd-6655-491c-82b4-9ca9ed61ff8c" containerID="c25908dd7f23d20fc7113b2c38ba023e8a767178e0237b2cb1f15fbb70f5bb65" exitCode=0 Jan 22 09:27:44 crc kubenswrapper[4892]: I0122 09:27:44.791974 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f0f0-account-create-update-v8np7" event={"ID":"3c59bcbd-6655-491c-82b4-9ca9ed61ff8c","Type":"ContainerDied","Data":"c25908dd7f23d20fc7113b2c38ba023e8a767178e0237b2cb1f15fbb70f5bb65"} Jan 22 09:27:44 crc kubenswrapper[4892]: I0122 09:27:44.794304 4892 generic.go:334] "Generic (PLEG): container finished" podID="3a1df4bd-7cb1-40b0-88f7-578961c621cb" containerID="201bac64f24ee694da1bc396467266b71da0c341c817ab543cb7f02b2c98a970" exitCode=0 Jan 22 09:27:44 crc kubenswrapper[4892]: I0122 09:27:44.794358 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-x5mtp" event={"ID":"3a1df4bd-7cb1-40b0-88f7-578961c621cb","Type":"ContainerDied","Data":"201bac64f24ee694da1bc396467266b71da0c341c817ab543cb7f02b2c98a970"} Jan 22 09:27:44 crc kubenswrapper[4892]: I0122 
09:27:44.796027 4892 generic.go:334] "Generic (PLEG): container finished" podID="6bc4eac9-d3bb-4c29-90e4-bb35bec79c96" containerID="18acfe35740859100c7341047b7e931f59f03ad4182587d633bad127393e567b" exitCode=0 Jan 22 09:27:44 crc kubenswrapper[4892]: I0122 09:27:44.796070 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8e4a-account-create-update-xv7gx" event={"ID":"6bc4eac9-d3bb-4c29-90e4-bb35bec79c96","Type":"ContainerDied","Data":"18acfe35740859100c7341047b7e931f59f03ad4182587d633bad127393e567b"} Jan 22 09:27:44 crc kubenswrapper[4892]: I0122 09:27:44.798918 4892 generic.go:334] "Generic (PLEG): container finished" podID="ec122921-e058-41f5-932e-836e78d5c91e" containerID="8564ac0ef64439d2b0c4bafeb6ed71771923e7608c9f5665e230751f0f0424cd" exitCode=0 Jan 22 09:27:44 crc kubenswrapper[4892]: I0122 09:27:44.798962 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ca79-account-create-update-rhltg" event={"ID":"ec122921-e058-41f5-932e-836e78d5c91e","Type":"ContainerDied","Data":"8564ac0ef64439d2b0c4bafeb6ed71771923e7608c9f5665e230751f0f0424cd"} Jan 22 09:27:44 crc kubenswrapper[4892]: I0122 09:27:44.800551 4892 generic.go:334] "Generic (PLEG): container finished" podID="589bb094-75ad-4bc7-bf98-f5efaade599d" containerID="d88ef14bb125b501c5b1f2c747b133418527f116acd157c43883ca406c8067a1" exitCode=0 Jan 22 09:27:44 crc kubenswrapper[4892]: I0122 09:27:44.800596 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jnldh" event={"ID":"589bb094-75ad-4bc7-bf98-f5efaade599d","Type":"ContainerDied","Data":"d88ef14bb125b501c5b1f2c747b133418527f116acd157c43883ca406c8067a1"} Jan 22 09:27:46 crc kubenswrapper[4892]: I0122 09:27:46.323868 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:27:46 crc kubenswrapper[4892]: I0122 09:27:46.323930 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:27:46 crc kubenswrapper[4892]: I0122 09:27:46.323983 4892 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" Jan 22 09:27:46 crc kubenswrapper[4892]: I0122 09:27:46.324735 4892 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"117e0c1b92dcf102d5c4006956ffbc6d1b9e2073ac26c26fea7a169bb0945ba2"} pod="openshift-machine-config-operator/machine-config-daemon-w87tf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 09:27:46 crc kubenswrapper[4892]: I0122 09:27:46.324800 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" containerID="cri-o://117e0c1b92dcf102d5c4006956ffbc6d1b9e2073ac26c26fea7a169bb0945ba2" gracePeriod=600 Jan 22 09:27:46 crc kubenswrapper[4892]: I0122 09:27:46.818850 4892 generic.go:334] "Generic 
(PLEG): container finished" podID="4765e554-3060-4876-90fe-5e054619d7a1" containerID="117e0c1b92dcf102d5c4006956ffbc6d1b9e2073ac26c26fea7a169bb0945ba2" exitCode=0 Jan 22 09:27:46 crc kubenswrapper[4892]: I0122 09:27:46.818952 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerDied","Data":"117e0c1b92dcf102d5c4006956ffbc6d1b9e2073ac26c26fea7a169bb0945ba2"} Jan 22 09:27:46 crc kubenswrapper[4892]: I0122 09:27:46.819243 4892 scope.go:117] "RemoveContainer" containerID="f3241bed9938434158615102d7fd185345d457bb0f2990573e82de1469f205ee" Jan 22 09:27:47 crc kubenswrapper[4892]: I0122 09:27:47.835041 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pv7r2" event={"ID":"6fe42496-0b56-4cb9-a100-7098f1ecd0ae","Type":"ContainerDied","Data":"18e60d76837f86e4d4f0a4334f072cde76f7f674763864c6f4660dcc9ea1e7bf"} Jan 22 09:27:47 crc kubenswrapper[4892]: I0122 09:27:47.835406 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18e60d76837f86e4d4f0a4334f072cde76f7f674763864c6f4660dcc9ea1e7bf" Jan 22 09:27:47 crc kubenswrapper[4892]: I0122 09:27:47.836925 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f0f0-account-create-update-v8np7" event={"ID":"3c59bcbd-6655-491c-82b4-9ca9ed61ff8c","Type":"ContainerDied","Data":"e3eb7b21e1bdc0927384c503ab5e4130706c322d2718e0c4da8296202003d69c"} Jan 22 09:27:47 crc kubenswrapper[4892]: I0122 09:27:47.836963 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3eb7b21e1bdc0927384c503ab5e4130706c322d2718e0c4da8296202003d69c" Jan 22 09:27:47 crc kubenswrapper[4892]: I0122 09:27:47.838413 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-x5mtp" event={"ID":"3a1df4bd-7cb1-40b0-88f7-578961c621cb","Type":"ContainerDied","Data":"6c87b198a3399ad26500b78441703463be999e4fa320e08748032c8c40d51731"} Jan 22 09:27:47 crc kubenswrapper[4892]: I0122 09:27:47.838557 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c87b198a3399ad26500b78441703463be999e4fa320e08748032c8c40d51731" Jan 22 09:27:47 crc kubenswrapper[4892]: I0122 09:27:47.839702 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8e4a-account-create-update-xv7gx" event={"ID":"6bc4eac9-d3bb-4c29-90e4-bb35bec79c96","Type":"ContainerDied","Data":"1e729ae4ff9ffae6863199549b292b55ecf7a38f71950a11e0843a39d29ea8c4"} Jan 22 09:27:47 crc kubenswrapper[4892]: I0122 09:27:47.839725 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e729ae4ff9ffae6863199549b292b55ecf7a38f71950a11e0843a39d29ea8c4" Jan 22 09:27:47 crc kubenswrapper[4892]: I0122 09:27:47.840746 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ca79-account-create-update-rhltg" event={"ID":"ec122921-e058-41f5-932e-836e78d5c91e","Type":"ContainerDied","Data":"8c84225343cf9942a92c997afc6f88515b8d4be253135e43915322ff56588ca5"} Jan 22 09:27:47 crc kubenswrapper[4892]: I0122 09:27:47.840768 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c84225343cf9942a92c997afc6f88515b8d4be253135e43915322ff56588ca5" Jan 22 09:27:47 crc kubenswrapper[4892]: I0122 09:27:47.842499 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jnldh" 
event={"ID":"589bb094-75ad-4bc7-bf98-f5efaade599d","Type":"ContainerDied","Data":"86a3d2458d24e75edc76e370675c4470cd0ff33faecaca2af1d083d3f463bd4d"} Jan 22 09:27:47 crc kubenswrapper[4892]: I0122 09:27:47.842519 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86a3d2458d24e75edc76e370675c4470cd0ff33faecaca2af1d083d3f463bd4d" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.007428 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f0f0-account-create-update-v8np7" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.071722 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-pv7r2" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.082842 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8e4a-account-create-update-xv7gx" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.120236 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ca79-account-create-update-rhltg" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.129375 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-x5mtp" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.150672 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jnldh" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.172861 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgjmr\" (UniqueName: \"kubernetes.io/projected/ec122921-e058-41f5-932e-836e78d5c91e-kube-api-access-fgjmr\") pod \"ec122921-e058-41f5-932e-836e78d5c91e\" (UID: \"ec122921-e058-41f5-932e-836e78d5c91e\") " Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.172959 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tr55v\" (UniqueName: \"kubernetes.io/projected/6bc4eac9-d3bb-4c29-90e4-bb35bec79c96-kube-api-access-tr55v\") pod \"6bc4eac9-d3bb-4c29-90e4-bb35bec79c96\" (UID: \"6bc4eac9-d3bb-4c29-90e4-bb35bec79c96\") " Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.172989 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec122921-e058-41f5-932e-836e78d5c91e-operator-scripts\") pod \"ec122921-e058-41f5-932e-836e78d5c91e\" (UID: \"ec122921-e058-41f5-932e-836e78d5c91e\") " Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.173039 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/589bb094-75ad-4bc7-bf98-f5efaade599d-operator-scripts\") pod \"589bb094-75ad-4bc7-bf98-f5efaade599d\" (UID: \"589bb094-75ad-4bc7-bf98-f5efaade599d\") " Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.173076 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4thn\" (UniqueName: \"kubernetes.io/projected/589bb094-75ad-4bc7-bf98-f5efaade599d-kube-api-access-b4thn\") pod \"589bb094-75ad-4bc7-bf98-f5efaade599d\" (UID: \"589bb094-75ad-4bc7-bf98-f5efaade599d\") " Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.173132 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkcrr\" (UniqueName: 
\"kubernetes.io/projected/3a1df4bd-7cb1-40b0-88f7-578961c621cb-kube-api-access-dkcrr\") pod \"3a1df4bd-7cb1-40b0-88f7-578961c621cb\" (UID: \"3a1df4bd-7cb1-40b0-88f7-578961c621cb\") " Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.173162 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a1df4bd-7cb1-40b0-88f7-578961c621cb-operator-scripts\") pod \"3a1df4bd-7cb1-40b0-88f7-578961c621cb\" (UID: \"3a1df4bd-7cb1-40b0-88f7-578961c621cb\") " Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.173191 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c59bcbd-6655-491c-82b4-9ca9ed61ff8c-operator-scripts\") pod \"3c59bcbd-6655-491c-82b4-9ca9ed61ff8c\" (UID: \"3c59bcbd-6655-491c-82b4-9ca9ed61ff8c\") " Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.173225 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdlk2\" (UniqueName: \"kubernetes.io/projected/6fe42496-0b56-4cb9-a100-7098f1ecd0ae-kube-api-access-qdlk2\") pod \"6fe42496-0b56-4cb9-a100-7098f1ecd0ae\" (UID: \"6fe42496-0b56-4cb9-a100-7098f1ecd0ae\") " Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.173321 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bc4eac9-d3bb-4c29-90e4-bb35bec79c96-operator-scripts\") pod \"6bc4eac9-d3bb-4c29-90e4-bb35bec79c96\" (UID: \"6bc4eac9-d3bb-4c29-90e4-bb35bec79c96\") " Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.173360 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fe42496-0b56-4cb9-a100-7098f1ecd0ae-operator-scripts\") pod \"6fe42496-0b56-4cb9-a100-7098f1ecd0ae\" (UID: \"6fe42496-0b56-4cb9-a100-7098f1ecd0ae\") " Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.173497 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cghm4\" (UniqueName: \"kubernetes.io/projected/3c59bcbd-6655-491c-82b4-9ca9ed61ff8c-kube-api-access-cghm4\") pod \"3c59bcbd-6655-491c-82b4-9ca9ed61ff8c\" (UID: \"3c59bcbd-6655-491c-82b4-9ca9ed61ff8c\") " Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.175258 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bc4eac9-d3bb-4c29-90e4-bb35bec79c96-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6bc4eac9-d3bb-4c29-90e4-bb35bec79c96" (UID: "6bc4eac9-d3bb-4c29-90e4-bb35bec79c96"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.175737 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a1df4bd-7cb1-40b0-88f7-578961c621cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3a1df4bd-7cb1-40b0-88f7-578961c621cb" (UID: "3a1df4bd-7cb1-40b0-88f7-578961c621cb"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.175975 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fe42496-0b56-4cb9-a100-7098f1ecd0ae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6fe42496-0b56-4cb9-a100-7098f1ecd0ae" (UID: "6fe42496-0b56-4cb9-a100-7098f1ecd0ae"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.175981 4892 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bc4eac9-d3bb-4c29-90e4-bb35bec79c96-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.176023 4892 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a1df4bd-7cb1-40b0-88f7-578961c621cb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.176526 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/589bb094-75ad-4bc7-bf98-f5efaade599d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "589bb094-75ad-4bc7-bf98-f5efaade599d" (UID: "589bb094-75ad-4bc7-bf98-f5efaade599d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.178407 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec122921-e058-41f5-932e-836e78d5c91e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec122921-e058-41f5-932e-836e78d5c91e" (UID: "ec122921-e058-41f5-932e-836e78d5c91e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.180658 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bc4eac9-d3bb-4c29-90e4-bb35bec79c96-kube-api-access-tr55v" (OuterVolumeSpecName: "kube-api-access-tr55v") pod "6bc4eac9-d3bb-4c29-90e4-bb35bec79c96" (UID: "6bc4eac9-d3bb-4c29-90e4-bb35bec79c96"). InnerVolumeSpecName "kube-api-access-tr55v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.182875 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c59bcbd-6655-491c-82b4-9ca9ed61ff8c-kube-api-access-cghm4" (OuterVolumeSpecName: "kube-api-access-cghm4") pod "3c59bcbd-6655-491c-82b4-9ca9ed61ff8c" (UID: "3c59bcbd-6655-491c-82b4-9ca9ed61ff8c"). InnerVolumeSpecName "kube-api-access-cghm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.183815 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fe42496-0b56-4cb9-a100-7098f1ecd0ae-kube-api-access-qdlk2" (OuterVolumeSpecName: "kube-api-access-qdlk2") pod "6fe42496-0b56-4cb9-a100-7098f1ecd0ae" (UID: "6fe42496-0b56-4cb9-a100-7098f1ecd0ae"). InnerVolumeSpecName "kube-api-access-qdlk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.184064 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c59bcbd-6655-491c-82b4-9ca9ed61ff8c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3c59bcbd-6655-491c-82b4-9ca9ed61ff8c" (UID: "3c59bcbd-6655-491c-82b4-9ca9ed61ff8c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.185747 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/589bb094-75ad-4bc7-bf98-f5efaade599d-kube-api-access-b4thn" (OuterVolumeSpecName: "kube-api-access-b4thn") pod "589bb094-75ad-4bc7-bf98-f5efaade599d" (UID: "589bb094-75ad-4bc7-bf98-f5efaade599d"). InnerVolumeSpecName "kube-api-access-b4thn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.187436 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a1df4bd-7cb1-40b0-88f7-578961c621cb-kube-api-access-dkcrr" (OuterVolumeSpecName: "kube-api-access-dkcrr") pod "3a1df4bd-7cb1-40b0-88f7-578961c621cb" (UID: "3a1df4bd-7cb1-40b0-88f7-578961c621cb"). InnerVolumeSpecName "kube-api-access-dkcrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.188972 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec122921-e058-41f5-932e-836e78d5c91e-kube-api-access-fgjmr" (OuterVolumeSpecName: "kube-api-access-fgjmr") pod "ec122921-e058-41f5-932e-836e78d5c91e" (UID: "ec122921-e058-41f5-932e-836e78d5c91e"). InnerVolumeSpecName "kube-api-access-fgjmr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.276709 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cghm4\" (UniqueName: \"kubernetes.io/projected/3c59bcbd-6655-491c-82b4-9ca9ed61ff8c-kube-api-access-cghm4\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.276761 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgjmr\" (UniqueName: \"kubernetes.io/projected/ec122921-e058-41f5-932e-836e78d5c91e-kube-api-access-fgjmr\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.276774 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tr55v\" (UniqueName: \"kubernetes.io/projected/6bc4eac9-d3bb-4c29-90e4-bb35bec79c96-kube-api-access-tr55v\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.276787 4892 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec122921-e058-41f5-932e-836e78d5c91e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.276799 4892 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/589bb094-75ad-4bc7-bf98-f5efaade599d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.276810 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4thn\" (UniqueName: \"kubernetes.io/projected/589bb094-75ad-4bc7-bf98-f5efaade599d-kube-api-access-b4thn\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.276821 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkcrr\" (UniqueName: \"kubernetes.io/projected/3a1df4bd-7cb1-40b0-88f7-578961c621cb-kube-api-access-dkcrr\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.276832 4892 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c59bcbd-6655-491c-82b4-9ca9ed61ff8c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.276843 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdlk2\" (UniqueName: \"kubernetes.io/projected/6fe42496-0b56-4cb9-a100-7098f1ecd0ae-kube-api-access-qdlk2\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.276855 4892 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fe42496-0b56-4cb9-a100-7098f1ecd0ae-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.854223 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerStarted","Data":"31a997f31663709d14ae5efb219a31b8ac9b066d6e93055a348ee5203f0f3774"} Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.856135 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-pv7r2" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.856202 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-8e4a-account-create-update-xv7gx" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.856207 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ca79-account-create-update-rhltg" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.856253 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-x5mtp" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.856309 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f0f0-account-create-update-v8np7" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.856312 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8prsr" event={"ID":"a418ac05-1cad-45b4-a9b6-74b4db83248f","Type":"ContainerStarted","Data":"10f0328e9097065a0095b2a4d49ae79fa35db2252bcc5d506f3ef8834793dd9b"} Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.856340 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jnldh" Jan 22 09:27:48 crc kubenswrapper[4892]: I0122 09:27:48.906418 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-8prsr" podStartSLOduration=2.729470605 podStartE2EDuration="6.906393149s" podCreationTimestamp="2026-01-22 09:27:42 +0000 UTC" firstStartedPulling="2026-01-22 09:27:43.624329325 +0000 UTC m=+1033.468408388" lastFinishedPulling="2026-01-22 09:27:47.801251859 +0000 UTC m=+1037.645330932" observedRunningTime="2026-01-22 09:27:48.895722673 +0000 UTC m=+1038.739801736" watchObservedRunningTime="2026-01-22 09:27:48.906393149 +0000 UTC m=+1038.750472232" Jan 22 09:27:49 crc kubenswrapper[4892]: I0122 09:27:49.365431 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56c9bc6f5c-mgwb6" Jan 22 09:27:49 crc kubenswrapper[4892]: I0122 09:27:49.456908 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-4flns"] Jan 22 09:27:49 crc kubenswrapper[4892]: I0122 09:27:49.457466 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6cb545bd4c-4flns" podUID="d66d181d-3705-4a58-8091-2bef209b355c" containerName="dnsmasq-dns" containerID="cri-o://1802c480527504a240f7ff8eec03d66747e8800343ff31f84f31e19dd58c9f50" gracePeriod=10 Jan 22 09:27:49 crc kubenswrapper[4892]: I0122 09:27:49.864475 4892 generic.go:334] "Generic (PLEG): container finished" podID="d66d181d-3705-4a58-8091-2bef209b355c" containerID="1802c480527504a240f7ff8eec03d66747e8800343ff31f84f31e19dd58c9f50" exitCode=0 Jan 22 09:27:49 crc kubenswrapper[4892]: I0122 09:27:49.864562 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-4flns" event={"ID":"d66d181d-3705-4a58-8091-2bef209b355c","Type":"ContainerDied","Data":"1802c480527504a240f7ff8eec03d66747e8800343ff31f84f31e19dd58c9f50"} Jan 22 09:27:49 crc kubenswrapper[4892]: I0122 09:27:49.865121 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-4flns" event={"ID":"d66d181d-3705-4a58-8091-2bef209b355c","Type":"ContainerDied","Data":"c8e4d7870a229d7623714c1e2f5107cc627b5a09f53ac52a847489dffd01581b"} Jan 22 09:27:49 crc kubenswrapper[4892]: I0122 09:27:49.865136 4892 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="c8e4d7870a229d7623714c1e2f5107cc627b5a09f53ac52a847489dffd01581b" Jan 22 09:27:49 crc kubenswrapper[4892]: I0122 09:27:49.918424 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-4flns" Jan 22 09:27:50 crc kubenswrapper[4892]: I0122 09:27:50.112998 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d66d181d-3705-4a58-8091-2bef209b355c-ovsdbserver-sb\") pod \"d66d181d-3705-4a58-8091-2bef209b355c\" (UID: \"d66d181d-3705-4a58-8091-2bef209b355c\") " Jan 22 09:27:50 crc kubenswrapper[4892]: I0122 09:27:50.113068 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d66d181d-3705-4a58-8091-2bef209b355c-ovsdbserver-nb\") pod \"d66d181d-3705-4a58-8091-2bef209b355c\" (UID: \"d66d181d-3705-4a58-8091-2bef209b355c\") " Jan 22 09:27:50 crc kubenswrapper[4892]: I0122 09:27:50.113347 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d66d181d-3705-4a58-8091-2bef209b355c-dns-svc\") pod \"d66d181d-3705-4a58-8091-2bef209b355c\" (UID: \"d66d181d-3705-4a58-8091-2bef209b355c\") " Jan 22 09:27:50 crc kubenswrapper[4892]: I0122 09:27:50.113421 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcmdw\" (UniqueName: \"kubernetes.io/projected/d66d181d-3705-4a58-8091-2bef209b355c-kube-api-access-jcmdw\") pod \"d66d181d-3705-4a58-8091-2bef209b355c\" (UID: \"d66d181d-3705-4a58-8091-2bef209b355c\") " Jan 22 09:27:50 crc kubenswrapper[4892]: I0122 09:27:50.113455 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d66d181d-3705-4a58-8091-2bef209b355c-config\") pod \"d66d181d-3705-4a58-8091-2bef209b355c\" (UID: \"d66d181d-3705-4a58-8091-2bef209b355c\") " Jan 22 09:27:50 crc kubenswrapper[4892]: I0122 09:27:50.119033 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d66d181d-3705-4a58-8091-2bef209b355c-kube-api-access-jcmdw" (OuterVolumeSpecName: "kube-api-access-jcmdw") pod "d66d181d-3705-4a58-8091-2bef209b355c" (UID: "d66d181d-3705-4a58-8091-2bef209b355c"). InnerVolumeSpecName "kube-api-access-jcmdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:27:50 crc kubenswrapper[4892]: I0122 09:27:50.156423 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d66d181d-3705-4a58-8091-2bef209b355c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d66d181d-3705-4a58-8091-2bef209b355c" (UID: "d66d181d-3705-4a58-8091-2bef209b355c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:50 crc kubenswrapper[4892]: I0122 09:27:50.170027 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d66d181d-3705-4a58-8091-2bef209b355c-config" (OuterVolumeSpecName: "config") pod "d66d181d-3705-4a58-8091-2bef209b355c" (UID: "d66d181d-3705-4a58-8091-2bef209b355c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:50 crc kubenswrapper[4892]: I0122 09:27:50.170567 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d66d181d-3705-4a58-8091-2bef209b355c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d66d181d-3705-4a58-8091-2bef209b355c" (UID: "d66d181d-3705-4a58-8091-2bef209b355c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:50 crc kubenswrapper[4892]: I0122 09:27:50.171307 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d66d181d-3705-4a58-8091-2bef209b355c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d66d181d-3705-4a58-8091-2bef209b355c" (UID: "d66d181d-3705-4a58-8091-2bef209b355c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:50 crc kubenswrapper[4892]: I0122 09:27:50.215506 4892 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d66d181d-3705-4a58-8091-2bef209b355c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:50 crc kubenswrapper[4892]: I0122 09:27:50.215742 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcmdw\" (UniqueName: \"kubernetes.io/projected/d66d181d-3705-4a58-8091-2bef209b355c-kube-api-access-jcmdw\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:50 crc kubenswrapper[4892]: I0122 09:27:50.215803 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d66d181d-3705-4a58-8091-2bef209b355c-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:50 crc kubenswrapper[4892]: I0122 09:27:50.215855 4892 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d66d181d-3705-4a58-8091-2bef209b355c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:50 crc kubenswrapper[4892]: I0122 09:27:50.215911 4892 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d66d181d-3705-4a58-8091-2bef209b355c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:50 crc kubenswrapper[4892]: I0122 09:27:50.873417 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-4flns" Jan 22 09:27:50 crc kubenswrapper[4892]: I0122 09:27:50.911318 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-4flns"] Jan 22 09:27:50 crc kubenswrapper[4892]: I0122 09:27:50.924560 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-4flns"] Jan 22 09:27:51 crc kubenswrapper[4892]: I0122 09:27:51.430632 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d66d181d-3705-4a58-8091-2bef209b355c" path="/var/lib/kubelet/pods/d66d181d-3705-4a58-8091-2bef209b355c/volumes" Jan 22 09:27:51 crc kubenswrapper[4892]: I0122 09:27:51.883113 4892 generic.go:334] "Generic (PLEG): container finished" podID="a418ac05-1cad-45b4-a9b6-74b4db83248f" containerID="10f0328e9097065a0095b2a4d49ae79fa35db2252bcc5d506f3ef8834793dd9b" exitCode=0 Jan 22 09:27:51 crc kubenswrapper[4892]: I0122 09:27:51.883151 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8prsr" event={"ID":"a418ac05-1cad-45b4-a9b6-74b4db83248f","Type":"ContainerDied","Data":"10f0328e9097065a0095b2a4d49ae79fa35db2252bcc5d506f3ef8834793dd9b"} Jan 22 09:27:53 crc kubenswrapper[4892]: I0122 09:27:53.186791 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8prsr" Jan 22 09:27:53 crc kubenswrapper[4892]: I0122 09:27:53.365821 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st24d\" (UniqueName: \"kubernetes.io/projected/a418ac05-1cad-45b4-a9b6-74b4db83248f-kube-api-access-st24d\") pod \"a418ac05-1cad-45b4-a9b6-74b4db83248f\" (UID: \"a418ac05-1cad-45b4-a9b6-74b4db83248f\") " Jan 22 09:27:53 crc kubenswrapper[4892]: I0122 09:27:53.365950 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a418ac05-1cad-45b4-a9b6-74b4db83248f-config-data\") pod \"a418ac05-1cad-45b4-a9b6-74b4db83248f\" (UID: \"a418ac05-1cad-45b4-a9b6-74b4db83248f\") " Jan 22 09:27:53 crc kubenswrapper[4892]: I0122 09:27:53.365993 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a418ac05-1cad-45b4-a9b6-74b4db83248f-combined-ca-bundle\") pod \"a418ac05-1cad-45b4-a9b6-74b4db83248f\" (UID: \"a418ac05-1cad-45b4-a9b6-74b4db83248f\") " Jan 22 09:27:53 crc kubenswrapper[4892]: I0122 09:27:53.371458 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a418ac05-1cad-45b4-a9b6-74b4db83248f-kube-api-access-st24d" (OuterVolumeSpecName: "kube-api-access-st24d") pod "a418ac05-1cad-45b4-a9b6-74b4db83248f" (UID: "a418ac05-1cad-45b4-a9b6-74b4db83248f"). InnerVolumeSpecName "kube-api-access-st24d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:27:53 crc kubenswrapper[4892]: I0122 09:27:53.386904 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a418ac05-1cad-45b4-a9b6-74b4db83248f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a418ac05-1cad-45b4-a9b6-74b4db83248f" (UID: "a418ac05-1cad-45b4-a9b6-74b4db83248f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:27:53 crc kubenswrapper[4892]: I0122 09:27:53.406181 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a418ac05-1cad-45b4-a9b6-74b4db83248f-config-data" (OuterVolumeSpecName: "config-data") pod "a418ac05-1cad-45b4-a9b6-74b4db83248f" (UID: "a418ac05-1cad-45b4-a9b6-74b4db83248f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:27:53 crc kubenswrapper[4892]: I0122 09:27:53.467871 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st24d\" (UniqueName: \"kubernetes.io/projected/a418ac05-1cad-45b4-a9b6-74b4db83248f-kube-api-access-st24d\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:53 crc kubenswrapper[4892]: I0122 09:27:53.467900 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a418ac05-1cad-45b4-a9b6-74b4db83248f-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:53 crc kubenswrapper[4892]: I0122 09:27:53.467909 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a418ac05-1cad-45b4-a9b6-74b4db83248f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:53 crc kubenswrapper[4892]: I0122 09:27:53.904215 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8prsr" event={"ID":"a418ac05-1cad-45b4-a9b6-74b4db83248f","Type":"ContainerDied","Data":"a694e80d143a1ef0d2117f47afbadd697035118ed03b41b864aa6b26522dcee0"} Jan 22 09:27:53 crc kubenswrapper[4892]: I0122 09:27:53.904489 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a694e80d143a1ef0d2117f47afbadd697035118ed03b41b864aa6b26522dcee0" Jan 22 09:27:53 crc kubenswrapper[4892]: I0122 09:27:53.904322 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-8prsr" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.086730 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-tk4sv"] Jan 22 09:27:54 crc kubenswrapper[4892]: E0122 09:27:54.087045 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a1df4bd-7cb1-40b0-88f7-578961c621cb" containerName="mariadb-database-create" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.087060 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a1df4bd-7cb1-40b0-88f7-578961c621cb" containerName="mariadb-database-create" Jan 22 09:27:54 crc kubenswrapper[4892]: E0122 09:27:54.087074 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="589bb094-75ad-4bc7-bf98-f5efaade599d" containerName="mariadb-database-create" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.087083 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="589bb094-75ad-4bc7-bf98-f5efaade599d" containerName="mariadb-database-create" Jan 22 09:27:54 crc kubenswrapper[4892]: E0122 09:27:54.087091 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bc4eac9-d3bb-4c29-90e4-bb35bec79c96" containerName="mariadb-account-create-update" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.087097 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bc4eac9-d3bb-4c29-90e4-bb35bec79c96" containerName="mariadb-account-create-update" Jan 22 09:27:54 crc kubenswrapper[4892]: E0122 09:27:54.087107 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d66d181d-3705-4a58-8091-2bef209b355c" containerName="dnsmasq-dns" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.087112 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="d66d181d-3705-4a58-8091-2bef209b355c" containerName="dnsmasq-dns" Jan 22 09:27:54 crc kubenswrapper[4892]: E0122 09:27:54.087123 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec122921-e058-41f5-932e-836e78d5c91e" containerName="mariadb-account-create-update" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.087129 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec122921-e058-41f5-932e-836e78d5c91e" containerName="mariadb-account-create-update" Jan 22 09:27:54 crc kubenswrapper[4892]: E0122 09:27:54.087141 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a418ac05-1cad-45b4-a9b6-74b4db83248f" containerName="keystone-db-sync" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.087146 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="a418ac05-1cad-45b4-a9b6-74b4db83248f" containerName="keystone-db-sync" Jan 22 09:27:54 crc kubenswrapper[4892]: E0122 09:27:54.087157 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fe42496-0b56-4cb9-a100-7098f1ecd0ae" containerName="mariadb-database-create" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.087162 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fe42496-0b56-4cb9-a100-7098f1ecd0ae" containerName="mariadb-database-create" Jan 22 09:27:54 crc kubenswrapper[4892]: E0122 09:27:54.087177 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c59bcbd-6655-491c-82b4-9ca9ed61ff8c" containerName="mariadb-account-create-update" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.087183 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c59bcbd-6655-491c-82b4-9ca9ed61ff8c" containerName="mariadb-account-create-update" Jan 22 09:27:54 crc 
kubenswrapper[4892]: E0122 09:27:54.087193 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d66d181d-3705-4a58-8091-2bef209b355c" containerName="init" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.087200 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="d66d181d-3705-4a58-8091-2bef209b355c" containerName="init" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.087396 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fe42496-0b56-4cb9-a100-7098f1ecd0ae" containerName="mariadb-database-create" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.087416 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="d66d181d-3705-4a58-8091-2bef209b355c" containerName="dnsmasq-dns" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.087425 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="a418ac05-1cad-45b4-a9b6-74b4db83248f" containerName="keystone-db-sync" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.087436 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a1df4bd-7cb1-40b0-88f7-578961c621cb" containerName="mariadb-database-create" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.087444 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec122921-e058-41f5-932e-836e78d5c91e" containerName="mariadb-account-create-update" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.087452 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bc4eac9-d3bb-4c29-90e4-bb35bec79c96" containerName="mariadb-account-create-update" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.087461 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="589bb094-75ad-4bc7-bf98-f5efaade599d" containerName="mariadb-database-create" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.087469 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c59bcbd-6655-491c-82b4-9ca9ed61ff8c" containerName="mariadb-account-create-update" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.088195 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-tk4sv" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.101712 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-tk4sv"] Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.121353 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-x9wpl"] Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.122350 4892 util.go:30] "No sandbox for pod can be found. 
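
The burst of cpu_manager and memory_manager "RemoveStaleState" entries above is admission-time housekeeping: before the new dnsmasq-dns-54b4bb76d5-tk4sv pod is admitted, the resource managers drop CPU and memory assignments still recorded for containers of pods that no longer exist (the completed create jobs, the deleted dnsmasq pod, keystone-db-sync). At its core this is a set difference over a state map; a toy rendition with invented types (real CPU assignments are cpusets, not int slices):

package main

import "fmt"

type key struct{ podUID, container string }

// removeStaleState drops assignments for containers whose pod is no longer
// active, as the kubelet's resource managers do before admitting a new pod.
func removeStaleState(assignments map[key][]int, activePods map[string]bool) {
	for k := range assignments {
		if !activePods[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container pod=%s name=%s\n",
				k.podUID, k.container)
			delete(assignments, k) // safe while ranging over a Go map
		}
	}
}

func main() {
	assignments := map[key][]int{
		{"d66d181d", "dnsmasq-dns"}:      {2, 3}, // deleted pod
		{"a418ac05", "keystone-db-sync"}: {4},    // completed pod
		{"f9f603c0", "dnsmasq-dns"}:      {5},    // new pod, still active
	}
	removeStaleState(assignments, map[string]bool{"f9f603c0": true})
	fmt.Println("remaining assignments:", len(assignments))
}
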
Need to start a new one" pod="openstack/keystone-bootstrap-x9wpl" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.126227 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.126415 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.126612 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.126802 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sdhhd" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.126945 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.136612 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-x9wpl"] Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.293206 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-dns-swift-storage-0\") pod \"dnsmasq-dns-54b4bb76d5-tk4sv\" (UID: \"f9f603c0-01ef-4cdc-a784-64187d0d3b9e\") " pod="openstack/dnsmasq-dns-54b4bb76d5-tk4sv" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.293272 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf83c5a9-ad08-4afb-b3e4-50690745a486-combined-ca-bundle\") pod \"keystone-bootstrap-x9wpl\" (UID: \"cf83c5a9-ad08-4afb-b3e4-50690745a486\") " pod="openstack/keystone-bootstrap-x9wpl" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.293399 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tmz2\" (UniqueName: \"kubernetes.io/projected/cf83c5a9-ad08-4afb-b3e4-50690745a486-kube-api-access-9tmz2\") pod \"keystone-bootstrap-x9wpl\" (UID: \"cf83c5a9-ad08-4afb-b3e4-50690745a486\") " pod="openstack/keystone-bootstrap-x9wpl" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.293420 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rtjf\" (UniqueName: \"kubernetes.io/projected/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-kube-api-access-8rtjf\") pod \"dnsmasq-dns-54b4bb76d5-tk4sv\" (UID: \"f9f603c0-01ef-4cdc-a784-64187d0d3b9e\") " pod="openstack/dnsmasq-dns-54b4bb76d5-tk4sv" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.293438 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf83c5a9-ad08-4afb-b3e4-50690745a486-config-data\") pod \"keystone-bootstrap-x9wpl\" (UID: \"cf83c5a9-ad08-4afb-b3e4-50690745a486\") " pod="openstack/keystone-bootstrap-x9wpl" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.293469 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf83c5a9-ad08-4afb-b3e4-50690745a486-scripts\") pod \"keystone-bootstrap-x9wpl\" (UID: \"cf83c5a9-ad08-4afb-b3e4-50690745a486\") " pod="openstack/keystone-bootstrap-x9wpl" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.293490 4892 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cf83c5a9-ad08-4afb-b3e4-50690745a486-credential-keys\") pod \"keystone-bootstrap-x9wpl\" (UID: \"cf83c5a9-ad08-4afb-b3e4-50690745a486\") " pod="openstack/keystone-bootstrap-x9wpl" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.293530 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-dns-svc\") pod \"dnsmasq-dns-54b4bb76d5-tk4sv\" (UID: \"f9f603c0-01ef-4cdc-a784-64187d0d3b9e\") " pod="openstack/dnsmasq-dns-54b4bb76d5-tk4sv" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.293546 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-ovsdbserver-nb\") pod \"dnsmasq-dns-54b4bb76d5-tk4sv\" (UID: \"f9f603c0-01ef-4cdc-a784-64187d0d3b9e\") " pod="openstack/dnsmasq-dns-54b4bb76d5-tk4sv" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.293562 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-ovsdbserver-sb\") pod \"dnsmasq-dns-54b4bb76d5-tk4sv\" (UID: \"f9f603c0-01ef-4cdc-a784-64187d0d3b9e\") " pod="openstack/dnsmasq-dns-54b4bb76d5-tk4sv" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.293586 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-config\") pod \"dnsmasq-dns-54b4bb76d5-tk4sv\" (UID: \"f9f603c0-01ef-4cdc-a784-64187d0d3b9e\") " pod="openstack/dnsmasq-dns-54b4bb76d5-tk4sv" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.293604 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf83c5a9-ad08-4afb-b3e4-50690745a486-fernet-keys\") pod \"keystone-bootstrap-x9wpl\" (UID: \"cf83c5a9-ad08-4afb-b3e4-50690745a486\") " pod="openstack/keystone-bootstrap-x9wpl" Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.321488 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5b6778c569-gt8dk"] Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.322806 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5b6778c569-gt8dk"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.328464 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.328916 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.329082 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.329234 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-twtw7"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.348040 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b6778c569-gt8dk"]
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.361419 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-5zvl8"]
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.362470 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-5zvl8"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.380959 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.381183 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-rj7lp"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.398336 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-5zvl8"]
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.399071 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-ovsdbserver-sb\") pod \"dnsmasq-dns-54b4bb76d5-tk4sv\" (UID: \"f9f603c0-01ef-4cdc-a784-64187d0d3b9e\") " pod="openstack/dnsmasq-dns-54b4bb76d5-tk4sv"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.399107 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-config\") pod \"dnsmasq-dns-54b4bb76d5-tk4sv\" (UID: \"f9f603c0-01ef-4cdc-a784-64187d0d3b9e\") " pod="openstack/dnsmasq-dns-54b4bb76d5-tk4sv"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.399127 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf83c5a9-ad08-4afb-b3e4-50690745a486-fernet-keys\") pod \"keystone-bootstrap-x9wpl\" (UID: \"cf83c5a9-ad08-4afb-b3e4-50690745a486\") " pod="openstack/keystone-bootstrap-x9wpl"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.399160 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-dns-swift-storage-0\") pod \"dnsmasq-dns-54b4bb76d5-tk4sv\" (UID: \"f9f603c0-01ef-4cdc-a784-64187d0d3b9e\") " pod="openstack/dnsmasq-dns-54b4bb76d5-tk4sv"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.399194 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf83c5a9-ad08-4afb-b3e4-50690745a486-combined-ca-bundle\") pod \"keystone-bootstrap-x9wpl\" (UID: \"cf83c5a9-ad08-4afb-b3e4-50690745a486\") " pod="openstack/keystone-bootstrap-x9wpl"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.399221 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tmz2\" (UniqueName: \"kubernetes.io/projected/cf83c5a9-ad08-4afb-b3e4-50690745a486-kube-api-access-9tmz2\") pod \"keystone-bootstrap-x9wpl\" (UID: \"cf83c5a9-ad08-4afb-b3e4-50690745a486\") " pod="openstack/keystone-bootstrap-x9wpl"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.399240 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rtjf\" (UniqueName: \"kubernetes.io/projected/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-kube-api-access-8rtjf\") pod \"dnsmasq-dns-54b4bb76d5-tk4sv\" (UID: \"f9f603c0-01ef-4cdc-a784-64187d0d3b9e\") " pod="openstack/dnsmasq-dns-54b4bb76d5-tk4sv"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.399256 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf83c5a9-ad08-4afb-b3e4-50690745a486-config-data\") pod \"keystone-bootstrap-x9wpl\" (UID: \"cf83c5a9-ad08-4afb-b3e4-50690745a486\") " pod="openstack/keystone-bootstrap-x9wpl"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.399357 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf83c5a9-ad08-4afb-b3e4-50690745a486-scripts\") pod \"keystone-bootstrap-x9wpl\" (UID: \"cf83c5a9-ad08-4afb-b3e4-50690745a486\") " pod="openstack/keystone-bootstrap-x9wpl"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.399383 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cf83c5a9-ad08-4afb-b3e4-50690745a486-credential-keys\") pod \"keystone-bootstrap-x9wpl\" (UID: \"cf83c5a9-ad08-4afb-b3e4-50690745a486\") " pod="openstack/keystone-bootstrap-x9wpl"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.399618 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-dns-svc\") pod \"dnsmasq-dns-54b4bb76d5-tk4sv\" (UID: \"f9f603c0-01ef-4cdc-a784-64187d0d3b9e\") " pod="openstack/dnsmasq-dns-54b4bb76d5-tk4sv"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.399639 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-ovsdbserver-nb\") pod \"dnsmasq-dns-54b4bb76d5-tk4sv\" (UID: \"f9f603c0-01ef-4cdc-a784-64187d0d3b9e\") " pod="openstack/dnsmasq-dns-54b4bb76d5-tk4sv"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.400429 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-ovsdbserver-sb\") pod \"dnsmasq-dns-54b4bb76d5-tk4sv\" (UID: \"f9f603c0-01ef-4cdc-a784-64187d0d3b9e\") " pod="openstack/dnsmasq-dns-54b4bb76d5-tk4sv"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.400653 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-config\") pod \"dnsmasq-dns-54b4bb76d5-tk4sv\" (UID: \"f9f603c0-01ef-4cdc-a784-64187d0d3b9e\") " pod="openstack/dnsmasq-dns-54b4bb76d5-tk4sv"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.401153 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-ovsdbserver-nb\") pod \"dnsmasq-dns-54b4bb76d5-tk4sv\" (UID: \"f9f603c0-01ef-4cdc-a784-64187d0d3b9e\") " pod="openstack/dnsmasq-dns-54b4bb76d5-tk4sv"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.401446 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-dns-svc\") pod \"dnsmasq-dns-54b4bb76d5-tk4sv\" (UID: \"f9f603c0-01ef-4cdc-a784-64187d0d3b9e\") " pod="openstack/dnsmasq-dns-54b4bb76d5-tk4sv"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.401512 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-dns-swift-storage-0\") pod \"dnsmasq-dns-54b4bb76d5-tk4sv\" (UID: \"f9f603c0-01ef-4cdc-a784-64187d0d3b9e\") " pod="openstack/dnsmasq-dns-54b4bb76d5-tk4sv"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.402893 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.407248 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cf83c5a9-ad08-4afb-b3e4-50690745a486-credential-keys\") pod \"keystone-bootstrap-x9wpl\" (UID: \"cf83c5a9-ad08-4afb-b3e4-50690745a486\") " pod="openstack/keystone-bootstrap-x9wpl"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.410495 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf83c5a9-ad08-4afb-b3e4-50690745a486-fernet-keys\") pod \"keystone-bootstrap-x9wpl\" (UID: \"cf83c5a9-ad08-4afb-b3e4-50690745a486\") " pod="openstack/keystone-bootstrap-x9wpl"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.413235 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf83c5a9-ad08-4afb-b3e4-50690745a486-scripts\") pod \"keystone-bootstrap-x9wpl\" (UID: \"cf83c5a9-ad08-4afb-b3e4-50690745a486\") " pod="openstack/keystone-bootstrap-x9wpl"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.420751 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf83c5a9-ad08-4afb-b3e4-50690745a486-combined-ca-bundle\") pod \"keystone-bootstrap-x9wpl\" (UID: \"cf83c5a9-ad08-4afb-b3e4-50690745a486\") " pod="openstack/keystone-bootstrap-x9wpl"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.425049 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf83c5a9-ad08-4afb-b3e4-50690745a486-config-data\") pod \"keystone-bootstrap-x9wpl\" (UID: \"cf83c5a9-ad08-4afb-b3e4-50690745a486\") " pod="openstack/keystone-bootstrap-x9wpl"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.453558 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rtjf\" (UniqueName: \"kubernetes.io/projected/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-kube-api-access-8rtjf\") pod \"dnsmasq-dns-54b4bb76d5-tk4sv\" (UID: \"f9f603c0-01ef-4cdc-a784-64187d0d3b9e\") " pod="openstack/dnsmasq-dns-54b4bb76d5-tk4sv"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.453985 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tmz2\" (UniqueName: \"kubernetes.io/projected/cf83c5a9-ad08-4afb-b3e4-50690745a486-kube-api-access-9tmz2\") pod \"keystone-bootstrap-x9wpl\" (UID: \"cf83c5a9-ad08-4afb-b3e4-50690745a486\") " pod="openstack/keystone-bootstrap-x9wpl"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.462006 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.474557 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.472528 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-x9wpl"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.492177 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.492846 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.503816 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft4q2\" (UniqueName: \"kubernetes.io/projected/d02e1363-2043-4097-bda0-012158a0bf56-kube-api-access-ft4q2\") pod \"horizon-5b6778c569-gt8dk\" (UID: \"d02e1363-2043-4097-bda0-012158a0bf56\") " pod="openstack/horizon-5b6778c569-gt8dk"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.504167 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/06ba4135-00fc-4891-bad5-e2e666eabd91-config\") pod \"neutron-db-sync-5zvl8\" (UID: \"06ba4135-00fc-4891-bad5-e2e666eabd91\") " pod="openstack/neutron-db-sync-5zvl8"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.504314 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d02e1363-2043-4097-bda0-012158a0bf56-horizon-secret-key\") pod \"horizon-5b6778c569-gt8dk\" (UID: \"d02e1363-2043-4097-bda0-012158a0bf56\") " pod="openstack/horizon-5b6778c569-gt8dk"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.504703 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d02e1363-2043-4097-bda0-012158a0bf56-scripts\") pod \"horizon-5b6778c569-gt8dk\" (UID: \"d02e1363-2043-4097-bda0-012158a0bf56\") " pod="openstack/horizon-5b6778c569-gt8dk"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.504859 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xprmj\" (UniqueName: \"kubernetes.io/projected/06ba4135-00fc-4891-bad5-e2e666eabd91-kube-api-access-xprmj\") pod \"neutron-db-sync-5zvl8\" (UID: \"06ba4135-00fc-4891-bad5-e2e666eabd91\") " pod="openstack/neutron-db-sync-5zvl8"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.504986 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ba4135-00fc-4891-bad5-e2e666eabd91-combined-ca-bundle\") pod \"neutron-db-sync-5zvl8\" (UID: \"06ba4135-00fc-4891-bad5-e2e666eabd91\") " pod="openstack/neutron-db-sync-5zvl8"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.505082 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d02e1363-2043-4097-bda0-012158a0bf56-logs\") pod \"horizon-5b6778c569-gt8dk\" (UID: \"d02e1363-2043-4097-bda0-012158a0bf56\") " pod="openstack/horizon-5b6778c569-gt8dk"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.505240 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d02e1363-2043-4097-bda0-012158a0bf56-config-data\") pod \"horizon-5b6778c569-gt8dk\" (UID: \"d02e1363-2043-4097-bda0-012158a0bf56\") " pod="openstack/horizon-5b6778c569-gt8dk"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.544061 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-7gn6t"]
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.545251 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7gn6t"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.548668 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-5p5cg"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.551772 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.551959 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.574045 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.608339 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7gn6t"]
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.610548 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft4q2\" (UniqueName: \"kubernetes.io/projected/d02e1363-2043-4097-bda0-012158a0bf56-kube-api-access-ft4q2\") pod \"horizon-5b6778c569-gt8dk\" (UID: \"d02e1363-2043-4097-bda0-012158a0bf56\") " pod="openstack/horizon-5b6778c569-gt8dk"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.610583 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/06ba4135-00fc-4891-bad5-e2e666eabd91-config\") pod \"neutron-db-sync-5zvl8\" (UID: \"06ba4135-00fc-4891-bad5-e2e666eabd91\") " pod="openstack/neutron-db-sync-5zvl8"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.610639 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d02e1363-2043-4097-bda0-012158a0bf56-horizon-secret-key\") pod \"horizon-5b6778c569-gt8dk\" (UID: \"d02e1363-2043-4097-bda0-012158a0bf56\") " pod="openstack/horizon-5b6778c569-gt8dk"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.610661 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\") " pod="openstack/ceilometer-0"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.610720 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d02e1363-2043-4097-bda0-012158a0bf56-scripts\") pod \"horizon-5b6778c569-gt8dk\" (UID: \"d02e1363-2043-4097-bda0-012158a0bf56\") " pod="openstack/horizon-5b6778c569-gt8dk"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.610750 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-run-httpd\") pod \"ceilometer-0\" (UID: \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\") " pod="openstack/ceilometer-0"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.610772 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-config-data\") pod \"ceilometer-0\" (UID: \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\") " pod="openstack/ceilometer-0"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.610794 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xprmj\" (UniqueName: \"kubernetes.io/projected/06ba4135-00fc-4891-bad5-e2e666eabd91-kube-api-access-xprmj\") pod \"neutron-db-sync-5zvl8\" (UID: \"06ba4135-00fc-4891-bad5-e2e666eabd91\") " pod="openstack/neutron-db-sync-5zvl8"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.610833 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ba4135-00fc-4891-bad5-e2e666eabd91-combined-ca-bundle\") pod \"neutron-db-sync-5zvl8\" (UID: \"06ba4135-00fc-4891-bad5-e2e666eabd91\") " pod="openstack/neutron-db-sync-5zvl8"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.610855 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d02e1363-2043-4097-bda0-012158a0bf56-logs\") pod \"horizon-5b6778c569-gt8dk\" (UID: \"d02e1363-2043-4097-bda0-012158a0bf56\") " pod="openstack/horizon-5b6778c569-gt8dk"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.610878 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-scripts\") pod \"ceilometer-0\" (UID: \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\") " pod="openstack/ceilometer-0"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.610925 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-log-httpd\") pod \"ceilometer-0\" (UID: \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\") " pod="openstack/ceilometer-0"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.610951 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d02e1363-2043-4097-bda0-012158a0bf56-config-data\") pod \"horizon-5b6778c569-gt8dk\" (UID: \"d02e1363-2043-4097-bda0-012158a0bf56\") " pod="openstack/horizon-5b6778c569-gt8dk"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.610977 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68sl5\" (UniqueName: \"kubernetes.io/projected/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-kube-api-access-68sl5\") pod \"ceilometer-0\" (UID: \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\") " pod="openstack/ceilometer-0"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.611001 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\") " pod="openstack/ceilometer-0"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.618212 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d02e1363-2043-4097-bda0-012158a0bf56-config-data\") pod \"horizon-5b6778c569-gt8dk\" (UID: \"d02e1363-2043-4097-bda0-012158a0bf56\") " pod="openstack/horizon-5b6778c569-gt8dk"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.618478 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d02e1363-2043-4097-bda0-012158a0bf56-logs\") pod \"horizon-5b6778c569-gt8dk\" (UID: \"d02e1363-2043-4097-bda0-012158a0bf56\") " pod="openstack/horizon-5b6778c569-gt8dk"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.620012 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d02e1363-2043-4097-bda0-012158a0bf56-scripts\") pod \"horizon-5b6778c569-gt8dk\" (UID: \"d02e1363-2043-4097-bda0-012158a0bf56\") " pod="openstack/horizon-5b6778c569-gt8dk"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.635859 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ba4135-00fc-4891-bad5-e2e666eabd91-combined-ca-bundle\") pod \"neutron-db-sync-5zvl8\" (UID: \"06ba4135-00fc-4891-bad5-e2e666eabd91\") " pod="openstack/neutron-db-sync-5zvl8"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.641205 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/06ba4135-00fc-4891-bad5-e2e666eabd91-config\") pod \"neutron-db-sync-5zvl8\" (UID: \"06ba4135-00fc-4891-bad5-e2e666eabd91\") " pod="openstack/neutron-db-sync-5zvl8"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.652669 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d02e1363-2043-4097-bda0-012158a0bf56-horizon-secret-key\") pod \"horizon-5b6778c569-gt8dk\" (UID: \"d02e1363-2043-4097-bda0-012158a0bf56\") " pod="openstack/horizon-5b6778c569-gt8dk"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.654369 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-5zrrd"]
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.655353 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-5zrrd"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.657806 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft4q2\" (UniqueName: \"kubernetes.io/projected/d02e1363-2043-4097-bda0-012158a0bf56-kube-api-access-ft4q2\") pod \"horizon-5b6778c569-gt8dk\" (UID: \"d02e1363-2043-4097-bda0-012158a0bf56\") " pod="openstack/horizon-5b6778c569-gt8dk"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.658120 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b6778c569-gt8dk"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.668363 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.668831 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.669042 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-rkjx8"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.672993 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xprmj\" (UniqueName: \"kubernetes.io/projected/06ba4135-00fc-4891-bad5-e2e666eabd91-kube-api-access-xprmj\") pod \"neutron-db-sync-5zvl8\" (UID: \"06ba4135-00fc-4891-bad5-e2e666eabd91\") " pod="openstack/neutron-db-sync-5zvl8"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.698393 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-5zrrd"]
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.698756 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-5zvl8"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.712143 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-75dc8c68f7-bpxpg"]
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.713460 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75dc8c68f7-bpxpg"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.717490 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8998452c-d0f3-42a2-8741-c70ffe854fda-db-sync-config-data\") pod \"cinder-db-sync-7gn6t\" (UID: \"8998452c-d0f3-42a2-8741-c70ffe854fda\") " pod="openstack/cinder-db-sync-7gn6t"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.717529 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f8xr\" (UniqueName: \"kubernetes.io/projected/8998452c-d0f3-42a2-8741-c70ffe854fda-kube-api-access-8f8xr\") pod \"cinder-db-sync-7gn6t\" (UID: \"8998452c-d0f3-42a2-8741-c70ffe854fda\") " pod="openstack/cinder-db-sync-7gn6t"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.717575 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\") " pod="openstack/ceilometer-0"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.717614 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8998452c-d0f3-42a2-8741-c70ffe854fda-combined-ca-bundle\") pod \"cinder-db-sync-7gn6t\" (UID: \"8998452c-d0f3-42a2-8741-c70ffe854fda\") " pod="openstack/cinder-db-sync-7gn6t"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.717650 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8998452c-d0f3-42a2-8741-c70ffe854fda-etc-machine-id\") pod \"cinder-db-sync-7gn6t\" (UID: \"8998452c-d0f3-42a2-8741-c70ffe854fda\") " pod="openstack/cinder-db-sync-7gn6t"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.717681 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-run-httpd\") pod \"ceilometer-0\" (UID: \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\") " pod="openstack/ceilometer-0"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.717700 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-config-data\") pod \"ceilometer-0\" (UID: \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\") " pod="openstack/ceilometer-0"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.717740 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-scripts\") pod \"ceilometer-0\" (UID: \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\") " pod="openstack/ceilometer-0"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.717764 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8998452c-d0f3-42a2-8741-c70ffe854fda-config-data\") pod \"cinder-db-sync-7gn6t\" (UID: \"8998452c-d0f3-42a2-8741-c70ffe854fda\") " pod="openstack/cinder-db-sync-7gn6t"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.717789 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-log-httpd\") pod \"ceilometer-0\" (UID: \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\") " pod="openstack/ceilometer-0"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.717817 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68sl5\" (UniqueName: \"kubernetes.io/projected/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-kube-api-access-68sl5\") pod \"ceilometer-0\" (UID: \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\") " pod="openstack/ceilometer-0"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.717841 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8998452c-d0f3-42a2-8741-c70ffe854fda-scripts\") pod \"cinder-db-sync-7gn6t\" (UID: \"8998452c-d0f3-42a2-8741-c70ffe854fda\") " pod="openstack/cinder-db-sync-7gn6t"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.717870 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\") " pod="openstack/ceilometer-0"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.719914 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-run-httpd\") pod \"ceilometer-0\" (UID: \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\") " pod="openstack/ceilometer-0"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.727115 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\") " pod="openstack/ceilometer-0"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.727365 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-log-httpd\") pod \"ceilometer-0\" (UID: \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\") " pod="openstack/ceilometer-0"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.727493 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-lnmlq"]
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.728588 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-lnmlq"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.737329 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\") " pod="openstack/ceilometer-0"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.739692 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-tk4sv"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.745005 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-scripts\") pod \"ceilometer-0\" (UID: \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\") " pod="openstack/ceilometer-0"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.748321 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.748488 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-6sstj"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.750836 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75dc8c68f7-bpxpg"]
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.751163 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-config-data\") pod \"ceilometer-0\" (UID: \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\") " pod="openstack/ceilometer-0"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.758929 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68sl5\" (UniqueName: \"kubernetes.io/projected/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-kube-api-access-68sl5\") pod \"ceilometer-0\" (UID: \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\") " pod="openstack/ceilometer-0"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.772342 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-lnmlq"]
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.795400 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-tk4sv"]
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.829941 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eca8ea69-b1df-4f64-b894-d1a33fedef9d-logs\") pod \"horizon-75dc8c68f7-bpxpg\" (UID: \"eca8ea69-b1df-4f64-b894-d1a33fedef9d\") " pod="openstack/horizon-75dc8c68f7-bpxpg"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.830004 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4-combined-ca-bundle\") pod \"placement-db-sync-5zrrd\" (UID: \"4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4\") " pod="openstack/placement-db-sync-5zrrd"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.830039 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7fa10241-be09-4db5-894b-845654f34a21-db-sync-config-data\") pod \"barbican-db-sync-lnmlq\" (UID: \"7fa10241-be09-4db5-894b-845654f34a21\") " pod="openstack/barbican-db-sync-lnmlq"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.830060 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8998452c-d0f3-42a2-8741-c70ffe854fda-combined-ca-bundle\") pod \"cinder-db-sync-7gn6t\" (UID: \"8998452c-d0f3-42a2-8741-c70ffe854fda\") " pod="openstack/cinder-db-sync-7gn6t"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.830088 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8998452c-d0f3-42a2-8741-c70ffe854fda-etc-machine-id\") pod \"cinder-db-sync-7gn6t\" (UID: \"8998452c-d0f3-42a2-8741-c70ffe854fda\") " pod="openstack/cinder-db-sync-7gn6t"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.830110 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eca8ea69-b1df-4f64-b894-d1a33fedef9d-scripts\") pod \"horizon-75dc8c68f7-bpxpg\" (UID: \"eca8ea69-b1df-4f64-b894-d1a33fedef9d\") " pod="openstack/horizon-75dc8c68f7-bpxpg"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.830145 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4-scripts\") pod \"placement-db-sync-5zrrd\" (UID: \"4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4\") " pod="openstack/placement-db-sync-5zrrd"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.830166 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4-config-data\") pod \"placement-db-sync-5zrrd\" (UID: \"4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4\") " pod="openstack/placement-db-sync-5zrrd"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.830188 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drqm4\" (UniqueName: \"kubernetes.io/projected/eca8ea69-b1df-4f64-b894-d1a33fedef9d-kube-api-access-drqm4\") pod \"horizon-75dc8c68f7-bpxpg\" (UID: \"eca8ea69-b1df-4f64-b894-d1a33fedef9d\") " pod="openstack/horizon-75dc8c68f7-bpxpg"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.830210 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/eca8ea69-b1df-4f64-b894-d1a33fedef9d-horizon-secret-key\") pod \"horizon-75dc8c68f7-bpxpg\" (UID: \"eca8ea69-b1df-4f64-b894-d1a33fedef9d\") " pod="openstack/horizon-75dc8c68f7-bpxpg"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.830226 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eca8ea69-b1df-4f64-b894-d1a33fedef9d-config-data\") pod \"horizon-75dc8c68f7-bpxpg\" (UID: \"eca8ea69-b1df-4f64-b894-d1a33fedef9d\") " pod="openstack/horizon-75dc8c68f7-bpxpg"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.830244 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8998452c-d0f3-42a2-8741-c70ffe854fda-config-data\") pod \"cinder-db-sync-7gn6t\" (UID: \"8998452c-d0f3-42a2-8741-c70ffe854fda\") " pod="openstack/cinder-db-sync-7gn6t"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.830273 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmntc\" (UniqueName: \"kubernetes.io/projected/4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4-kube-api-access-wmntc\") pod \"placement-db-sync-5zrrd\" (UID: \"4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4\") " pod="openstack/placement-db-sync-5zrrd"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.830310 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fa10241-be09-4db5-894b-845654f34a21-combined-ca-bundle\") pod \"barbican-db-sync-lnmlq\" (UID: \"7fa10241-be09-4db5-894b-845654f34a21\") " pod="openstack/barbican-db-sync-lnmlq"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.830342 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8998452c-d0f3-42a2-8741-c70ffe854fda-scripts\") pod \"cinder-db-sync-7gn6t\" (UID: \"8998452c-d0f3-42a2-8741-c70ffe854fda\") " pod="openstack/cinder-db-sync-7gn6t"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.830359 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8jp6\" (UniqueName: \"kubernetes.io/projected/7fa10241-be09-4db5-894b-845654f34a21-kube-api-access-n8jp6\") pod \"barbican-db-sync-lnmlq\" (UID: \"7fa10241-be09-4db5-894b-845654f34a21\") " pod="openstack/barbican-db-sync-lnmlq"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.830392 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8998452c-d0f3-42a2-8741-c70ffe854fda-db-sync-config-data\") pod \"cinder-db-sync-7gn6t\" (UID: \"8998452c-d0f3-42a2-8741-c70ffe854fda\") " pod="openstack/cinder-db-sync-7gn6t"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.830439 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f8xr\" (UniqueName: \"kubernetes.io/projected/8998452c-d0f3-42a2-8741-c70ffe854fda-kube-api-access-8f8xr\") pod \"cinder-db-sync-7gn6t\" (UID: \"8998452c-d0f3-42a2-8741-c70ffe854fda\") " pod="openstack/cinder-db-sync-7gn6t"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.830463 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4-logs\") pod \"placement-db-sync-5zrrd\" (UID: \"4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4\") " pod="openstack/placement-db-sync-5zrrd"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.831334 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8998452c-d0f3-42a2-8741-c70ffe854fda-etc-machine-id\") pod \"cinder-db-sync-7gn6t\" (UID: \"8998452c-d0f3-42a2-8741-c70ffe854fda\") " pod="openstack/cinder-db-sync-7gn6t"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.859625 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8998452c-d0f3-42a2-8741-c70ffe854fda-db-sync-config-data\") pod \"cinder-db-sync-7gn6t\" (UID: \"8998452c-d0f3-42a2-8741-c70ffe854fda\") " pod="openstack/cinder-db-sync-7gn6t"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.878655 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8998452c-d0f3-42a2-8741-c70ffe854fda-combined-ca-bundle\") pod \"cinder-db-sync-7gn6t\" (UID: \"8998452c-d0f3-42a2-8741-c70ffe854fda\") " pod="openstack/cinder-db-sync-7gn6t"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.879094 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8998452c-d0f3-42a2-8741-c70ffe854fda-scripts\") pod \"cinder-db-sync-7gn6t\" (UID: \"8998452c-d0f3-42a2-8741-c70ffe854fda\") " pod="openstack/cinder-db-sync-7gn6t"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.879500 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8998452c-d0f3-42a2-8741-c70ffe854fda-config-data\") pod \"cinder-db-sync-7gn6t\" (UID: \"8998452c-d0f3-42a2-8741-c70ffe854fda\") " pod="openstack/cinder-db-sync-7gn6t"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.884169 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f8xr\" (UniqueName: \"kubernetes.io/projected/8998452c-d0f3-42a2-8741-c70ffe854fda-kube-api-access-8f8xr\") pod \"cinder-db-sync-7gn6t\" (UID: \"8998452c-d0f3-42a2-8741-c70ffe854fda\") " pod="openstack/cinder-db-sync-7gn6t"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.907522 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-vjc2j"]
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.908865 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-vjc2j"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.926923 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.931304 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4-scripts\") pod \"placement-db-sync-5zrrd\" (UID: \"4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4\") " pod="openstack/placement-db-sync-5zrrd"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.931339 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4-config-data\") pod \"placement-db-sync-5zrrd\" (UID: \"4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4\") " pod="openstack/placement-db-sync-5zrrd"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.931376 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drqm4\" (UniqueName: \"kubernetes.io/projected/eca8ea69-b1df-4f64-b894-d1a33fedef9d-kube-api-access-drqm4\") pod \"horizon-75dc8c68f7-bpxpg\" (UID: \"eca8ea69-b1df-4f64-b894-d1a33fedef9d\") " pod="openstack/horizon-75dc8c68f7-bpxpg"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.931400 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/eca8ea69-b1df-4f64-b894-d1a33fedef9d-horizon-secret-key\") pod \"horizon-75dc8c68f7-bpxpg\" (UID: \"eca8ea69-b1df-4f64-b894-d1a33fedef9d\") " pod="openstack/horizon-75dc8c68f7-bpxpg"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.931419 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eca8ea69-b1df-4f64-b894-d1a33fedef9d-config-data\") pod \"horizon-75dc8c68f7-bpxpg\" (UID: \"eca8ea69-b1df-4f64-b894-d1a33fedef9d\") " pod="openstack/horizon-75dc8c68f7-bpxpg"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.931459 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmntc\" (UniqueName: \"kubernetes.io/projected/4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4-kube-api-access-wmntc\") pod \"placement-db-sync-5zrrd\" (UID: \"4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4\") " pod="openstack/placement-db-sync-5zrrd"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.931483 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fa10241-be09-4db5-894b-845654f34a21-combined-ca-bundle\") pod \"barbican-db-sync-lnmlq\" (UID: \"7fa10241-be09-4db5-894b-845654f34a21\") " pod="openstack/barbican-db-sync-lnmlq"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.931513 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8jp6\" (UniqueName: \"kubernetes.io/projected/7fa10241-be09-4db5-894b-845654f34a21-kube-api-access-n8jp6\") pod \"barbican-db-sync-lnmlq\" (UID: \"7fa10241-be09-4db5-894b-845654f34a21\") " pod="openstack/barbican-db-sync-lnmlq"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.931552 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4-logs\") pod \"placement-db-sync-5zrrd\" (UID: \"4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4\") " pod="openstack/placement-db-sync-5zrrd"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.931574 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eca8ea69-b1df-4f64-b894-d1a33fedef9d-logs\") pod \"horizon-75dc8c68f7-bpxpg\" (UID: \"eca8ea69-b1df-4f64-b894-d1a33fedef9d\") " pod="openstack/horizon-75dc8c68f7-bpxpg"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.931612 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4-combined-ca-bundle\") pod \"placement-db-sync-5zrrd\" (UID: \"4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4\") " pod="openstack/placement-db-sync-5zrrd"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.931643 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7fa10241-be09-4db5-894b-845654f34a21-db-sync-config-data\") pod \"barbican-db-sync-lnmlq\" (UID: \"7fa10241-be09-4db5-894b-845654f34a21\") " pod="openstack/barbican-db-sync-lnmlq"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.931693 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eca8ea69-b1df-4f64-b894-d1a33fedef9d-scripts\") pod \"horizon-75dc8c68f7-bpxpg\" (UID: \"eca8ea69-b1df-4f64-b894-d1a33fedef9d\") " pod="openstack/horizon-75dc8c68f7-bpxpg"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.932787 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eca8ea69-b1df-4f64-b894-d1a33fedef9d-scripts\") pod \"horizon-75dc8c68f7-bpxpg\" (UID: \"eca8ea69-b1df-4f64-b894-d1a33fedef9d\") " pod="openstack/horizon-75dc8c68f7-bpxpg"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.933858 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4-logs\") pod \"placement-db-sync-5zrrd\" (UID: \"4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4\") " pod="openstack/placement-db-sync-5zrrd"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.935589 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eca8ea69-b1df-4f64-b894-d1a33fedef9d-config-data\") pod \"horizon-75dc8c68f7-bpxpg\" (UID: \"eca8ea69-b1df-4f64-b894-d1a33fedef9d\") " pod="openstack/horizon-75dc8c68f7-bpxpg"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.943652 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eca8ea69-b1df-4f64-b894-d1a33fedef9d-logs\") pod \"horizon-75dc8c68f7-bpxpg\" (UID: \"eca8ea69-b1df-4f64-b894-d1a33fedef9d\") " pod="openstack/horizon-75dc8c68f7-bpxpg"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.947917 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fa10241-be09-4db5-894b-845654f34a21-combined-ca-bundle\") pod \"barbican-db-sync-lnmlq\" (UID: \"7fa10241-be09-4db5-894b-845654f34a21\") " pod="openstack/barbican-db-sync-lnmlq"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.948779 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/eca8ea69-b1df-4f64-b894-d1a33fedef9d-horizon-secret-key\") pod \"horizon-75dc8c68f7-bpxpg\" (UID: \"eca8ea69-b1df-4f64-b894-d1a33fedef9d\") " pod="openstack/horizon-75dc8c68f7-bpxpg"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.959798 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4-combined-ca-bundle\") pod \"placement-db-sync-5zrrd\" (UID: \"4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4\") " pod="openstack/placement-db-sync-5zrrd"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.960095 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4-scripts\") pod \"placement-db-sync-5zrrd\" (UID: \"4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4\") " pod="openstack/placement-db-sync-5zrrd"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.961810 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drqm4\" (UniqueName: \"kubernetes.io/projected/eca8ea69-b1df-4f64-b894-d1a33fedef9d-kube-api-access-drqm4\") pod \"horizon-75dc8c68f7-bpxpg\" (UID: \"eca8ea69-b1df-4f64-b894-d1a33fedef9d\") " pod="openstack/horizon-75dc8c68f7-bpxpg"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.962783 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.964044 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.965307 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4-config-data\") pod \"placement-db-sync-5zrrd\" (UID: \"4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4\") " pod="openstack/placement-db-sync-5zrrd"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.965514 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmntc\" (UniqueName: \"kubernetes.io/projected/4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4-kube-api-access-wmntc\") pod \"placement-db-sync-5zrrd\" (UID: \"4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4\") " pod="openstack/placement-db-sync-5zrrd"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.968501 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.968759 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.968934 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-w5vlr"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.969051 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Jan 22 09:27:54 crc kubenswrapper[4892]: I0122 09:27:54.969881 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7fa10241-be09-4db5-894b-845654f34a21-db-sync-config-data\") pod \"barbican-db-sync-lnmlq\" (UID: \"7fa10241-be09-4db5-894b-845654f34a21\") " pod="openstack/barbican-db-sync-lnmlq"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:54.992368 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-vjc2j"]
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:54.992798 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8jp6\" (UniqueName: \"kubernetes.io/projected/7fa10241-be09-4db5-894b-845654f34a21-kube-api-access-n8jp6\") pod \"barbican-db-sync-lnmlq\" (UID: \"7fa10241-be09-4db5-894b-845654f34a21\") " pod="openstack/barbican-db-sync-lnmlq"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:54.998799 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7gn6t"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.035653 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef0903d9-36f7-40fd-a9ef-5688e7030688-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-vjc2j\" (UID: \"ef0903d9-36f7-40fd-a9ef-5688e7030688\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-vjc2j"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.035727 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef0903d9-36f7-40fd-a9ef-5688e7030688-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-vjc2j\" (UID: \"ef0903d9-36f7-40fd-a9ef-5688e7030688\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-vjc2j"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.035773 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef0903d9-36f7-40fd-a9ef-5688e7030688-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-vjc2j\" (UID: \"ef0903d9-36f7-40fd-a9ef-5688e7030688\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-vjc2j"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.035796 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6tkf\" (UniqueName: \"kubernetes.io/projected/ef0903d9-36f7-40fd-a9ef-5688e7030688-kube-api-access-s6tkf\") pod \"dnsmasq-dns-5dc4fcdbc-vjc2j\" (UID: \"ef0903d9-36f7-40fd-a9ef-5688e7030688\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-vjc2j"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.035814 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef0903d9-36f7-40fd-a9ef-5688e7030688-config\") pod \"dnsmasq-dns-5dc4fcdbc-vjc2j\" (UID: \"ef0903d9-36f7-40fd-a9ef-5688e7030688\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-vjc2j"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.035849 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef0903d9-36f7-40fd-a9ef-5688e7030688-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-vjc2j\" (UID: \"ef0903d9-36f7-40fd-a9ef-5688e7030688\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-vjc2j"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.042619 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.059004 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-5zrrd"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.073721 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.080510 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.086692 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.087128 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.096109 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.138016 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8kxl\" (UniqueName: \"kubernetes.io/projected/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-kube-api-access-d8kxl\") pod \"glance-default-external-api-0\" (UID: \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\") " pod="openstack/glance-default-external-api-0"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.138100 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-scripts\") pod \"glance-default-external-api-0\" (UID: \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\") " pod="openstack/glance-default-external-api-0"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.138177 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\") " pod="openstack/glance-default-external-api-0"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.138222 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-logs\") pod \"glance-default-external-api-0\" (UID: \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\") " pod="openstack/glance-default-external-api-0"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.138355 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef0903d9-36f7-40fd-a9ef-5688e7030688-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-vjc2j\" (UID: \"ef0903d9-36f7-40fd-a9ef-5688e7030688\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-vjc2j"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.138419 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef0903d9-36f7-40fd-a9ef-5688e7030688-config\") pod \"dnsmasq-dns-5dc4fcdbc-vjc2j\" (UID: \"ef0903d9-36f7-40fd-a9ef-5688e7030688\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-vjc2j"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.138448 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6tkf\" (UniqueName: \"kubernetes.io/projected/ef0903d9-36f7-40fd-a9ef-5688e7030688-kube-api-access-s6tkf\") pod \"dnsmasq-dns-5dc4fcdbc-vjc2j\" (UID: \"ef0903d9-36f7-40fd-a9ef-5688e7030688\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-vjc2j"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.138498 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\") " pod="openstack/glance-default-external-api-0"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.138604 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef0903d9-36f7-40fd-a9ef-5688e7030688-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-vjc2j\" (UID: \"ef0903d9-36f7-40fd-a9ef-5688e7030688\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-vjc2j"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.138747 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\") " pod="openstack/glance-default-external-api-0"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.138910 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-config-data\") pod \"glance-default-external-api-0\" (UID: \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\") " pod="openstack/glance-default-external-api-0"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.138971 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef0903d9-36f7-40fd-a9ef-5688e7030688-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-vjc2j\" (UID: \"ef0903d9-36f7-40fd-a9ef-5688e7030688\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-vjc2j"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.139066 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\") " pod="openstack/glance-default-external-api-0"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.139103 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef0903d9-36f7-40fd-a9ef-5688e7030688-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-vjc2j\" (UID: \"ef0903d9-36f7-40fd-a9ef-5688e7030688\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-vjc2j"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.139669 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef0903d9-36f7-40fd-a9ef-5688e7030688-config\") pod \"dnsmasq-dns-5dc4fcdbc-vjc2j\" (UID: \"ef0903d9-36f7-40fd-a9ef-5688e7030688\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-vjc2j"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.139853 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef0903d9-36f7-40fd-a9ef-5688e7030688-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-vjc2j\" (UID: \"ef0903d9-36f7-40fd-a9ef-5688e7030688\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-vjc2j"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.140344 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef0903d9-36f7-40fd-a9ef-5688e7030688-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-vjc2j\" (UID: \"ef0903d9-36f7-40fd-a9ef-5688e7030688\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-vjc2j"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.141072 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef0903d9-36f7-40fd-a9ef-5688e7030688-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-vjc2j\" (UID: \"ef0903d9-36f7-40fd-a9ef-5688e7030688\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-vjc2j"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.143350 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef0903d9-36f7-40fd-a9ef-5688e7030688-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-vjc2j\" (UID: \"ef0903d9-36f7-40fd-a9ef-5688e7030688\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-vjc2j"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.148938 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75dc8c68f7-bpxpg"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.163912 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6tkf\" (UniqueName: \"kubernetes.io/projected/ef0903d9-36f7-40fd-a9ef-5688e7030688-kube-api-access-s6tkf\") pod \"dnsmasq-dns-5dc4fcdbc-vjc2j\" (UID: \"ef0903d9-36f7-40fd-a9ef-5688e7030688\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-vjc2j"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.226557 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-lnmlq"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.241369 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-logs\") pod \"glance-default-internal-api-0\" (UID: \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\") " pod="openstack/glance-default-internal-api-0"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.241417 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-config-data\") pod \"glance-default-external-api-0\" (UID: \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\") " pod="openstack/glance-default-external-api-0"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.241461 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\") " pod="openstack/glance-default-internal-api-0"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.241486 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\") " pod="openstack/glance-default-internal-api-0"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.241509 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\") " pod="openstack/glance-default-external-api-0"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.241534 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8kxl\" (UniqueName: \"kubernetes.io/projected/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-kube-api-access-d8kxl\") pod \"glance-default-external-api-0\" (UID: \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\") " pod="openstack/glance-default-external-api-0"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.241552 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-scripts\") pod \"glance-default-external-api-0\" (UID: \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\") " pod="openstack/glance-default-external-api-0"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.241572 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\") " pod="openstack/glance-default-external-api-0"
Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.241598 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-logs\") pod \"glance-default-external-api-0\" (UID: \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\") "
pod="openstack/glance-default-external-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.241629 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\") " pod="openstack/glance-default-external-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.241646 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.241712 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.241777 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.241860 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.241881 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h59kr\" (UniqueName: \"kubernetes.io/projected/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-kube-api-access-h59kr\") pod \"glance-default-internal-api-0\" (UID: \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.241901 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\") " pod="openstack/glance-default-external-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.242221 4892 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.242867 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\") 
" pod="openstack/glance-default-external-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.243430 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-logs\") pod \"glance-default-external-api-0\" (UID: \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\") " pod="openstack/glance-default-external-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.257278 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-scripts\") pod \"glance-default-external-api-0\" (UID: \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\") " pod="openstack/glance-default-external-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.265396 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-vjc2j" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.267548 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8kxl\" (UniqueName: \"kubernetes.io/projected/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-kube-api-access-d8kxl\") pod \"glance-default-external-api-0\" (UID: \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\") " pod="openstack/glance-default-external-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.269074 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-config-data\") pod \"glance-default-external-api-0\" (UID: \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\") " pod="openstack/glance-default-external-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.269113 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\") " pod="openstack/glance-default-external-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.273570 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\") " pod="openstack/glance-default-external-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.276727 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\") " pod="openstack/glance-default-external-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.327855 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.344441 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-logs\") pod \"glance-default-internal-api-0\" (UID: \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.344517 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.344546 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.344621 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.344652 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.344673 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.344736 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.344765 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h59kr\" (UniqueName: \"kubernetes.io/projected/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-kube-api-access-h59kr\") pod \"glance-default-internal-api-0\" (UID: \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.345595 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-logs\") pod \"glance-default-internal-api-0\" (UID: \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 
09:27:55.347499 4892 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.348026 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.351978 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.352086 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.360793 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.361122 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.372556 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h59kr\" (UniqueName: \"kubernetes.io/projected/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-kube-api-access-h59kr\") pod \"glance-default-internal-api-0\" (UID: \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.396076 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.449353 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-x9wpl"] Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.449429 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.623876 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-tk4sv"] Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.644334 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-5zvl8"] Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.655909 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.773518 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-5zrrd"] Jan 22 09:27:55 crc kubenswrapper[4892]: I0122 09:27:55.810027 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7gn6t"] Jan 22 09:27:56 crc kubenswrapper[4892]: I0122 09:27:56.020545 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b6778c569-gt8dk"] Jan 22 09:27:56 crc kubenswrapper[4892]: I0122 09:27:56.032691 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75dc8c68f7-bpxpg"] Jan 22 09:27:56 crc kubenswrapper[4892]: I0122 09:27:56.041844 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-vjc2j"] Jan 22 09:27:56 crc kubenswrapper[4892]: I0122 09:27:56.055437 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-lnmlq"] Jan 22 09:27:56 crc kubenswrapper[4892]: I0122 09:27:56.114271 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 09:27:56 crc kubenswrapper[4892]: W0122 09:27:56.500925 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9f603c0_01ef_4cdc_a784_64187d0d3b9e.slice/crio-4f2da7c7525cda6b67fa0ab21a74898808b6e565bf188deebc8c19fae7e0db38 WatchSource:0}: Error finding container 4f2da7c7525cda6b67fa0ab21a74898808b6e565bf188deebc8c19fae7e0db38: Status 404 returned error can't find the container with id 4f2da7c7525cda6b67fa0ab21a74898808b6e565bf188deebc8c19fae7e0db38 Jan 22 09:27:56 crc kubenswrapper[4892]: W0122 09:27:56.519070 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd02e1363_2043_4097_bda0_012158a0bf56.slice/crio-dfee83298b4e9f748aa712dbf1e331f085cb40c4fab4178d42abd96fb24175a1 WatchSource:0}: Error finding container dfee83298b4e9f748aa712dbf1e331f085cb40c4fab4178d42abd96fb24175a1: Status 404 returned error can't find the container with id dfee83298b4e9f748aa712dbf1e331f085cb40c4fab4178d42abd96fb24175a1 Jan 22 09:27:56 crc kubenswrapper[4892]: W0122 09:27:56.530894 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef0903d9_36f7_40fd_a9ef_5688e7030688.slice/crio-eb2e504baa4f2ae79e9fa37201edf21b87a64b47f6d03eeb7e9d10a6185d0902 WatchSource:0}: Error finding container eb2e504baa4f2ae79e9fa37201edf21b87a64b47f6d03eeb7e9d10a6185d0902: Status 404 returned error can't find the container with id eb2e504baa4f2ae79e9fa37201edf21b87a64b47f6d03eeb7e9d10a6185d0902 Jan 22 09:27:56 crc kubenswrapper[4892]: I0122 09:27:56.817769 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 09:27:56 crc kubenswrapper[4892]: I0122 09:27:56.892453 4892 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/horizon-5b6778c569-gt8dk"] Jan 22 09:27:56 crc kubenswrapper[4892]: I0122 09:27:56.936319 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 09:27:56 crc kubenswrapper[4892]: I0122 09:27:56.960575 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7db6cccc7c-d7w6r"] Jan 22 09:27:56 crc kubenswrapper[4892]: I0122 09:27:56.962222 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7db6cccc7c-d7w6r" Jan 22 09:27:56 crc kubenswrapper[4892]: I0122 09:27:56.974607 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7db6cccc7c-d7w6r"] Jan 22 09:27:57 crc kubenswrapper[4892]: I0122 09:27:57.005226 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:27:57 crc kubenswrapper[4892]: I0122 09:27:57.016628 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e5f9eb4e-41c6-45dd-8e09-69d0370a5504","Type":"ContainerStarted","Data":"5302ed0448ed49e5af028525a3538b59b06ba420051c8b2bf0fe7924b754319f"} Jan 22 09:27:57 crc kubenswrapper[4892]: I0122 09:27:57.019269 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c3b473db-e057-4d55-b3e6-171e8618722f-scripts\") pod \"horizon-7db6cccc7c-d7w6r\" (UID: \"c3b473db-e057-4d55-b3e6-171e8618722f\") " pod="openstack/horizon-7db6cccc7c-d7w6r" Jan 22 09:27:57 crc kubenswrapper[4892]: I0122 09:27:57.019417 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3b473db-e057-4d55-b3e6-171e8618722f-logs\") pod \"horizon-7db6cccc7c-d7w6r\" (UID: \"c3b473db-e057-4d55-b3e6-171e8618722f\") " pod="openstack/horizon-7db6cccc7c-d7w6r" Jan 22 09:27:57 crc kubenswrapper[4892]: I0122 09:27:57.019472 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c3b473db-e057-4d55-b3e6-171e8618722f-config-data\") pod \"horizon-7db6cccc7c-d7w6r\" (UID: \"c3b473db-e057-4d55-b3e6-171e8618722f\") " pod="openstack/horizon-7db6cccc7c-d7w6r" Jan 22 09:27:57 crc kubenswrapper[4892]: I0122 09:27:57.019494 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqhps\" (UniqueName: \"kubernetes.io/projected/c3b473db-e057-4d55-b3e6-171e8618722f-kube-api-access-kqhps\") pod \"horizon-7db6cccc7c-d7w6r\" (UID: \"c3b473db-e057-4d55-b3e6-171e8618722f\") " pod="openstack/horizon-7db6cccc7c-d7w6r" Jan 22 09:27:57 crc kubenswrapper[4892]: I0122 09:27:57.019531 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c3b473db-e057-4d55-b3e6-171e8618722f-horizon-secret-key\") pod \"horizon-7db6cccc7c-d7w6r\" (UID: \"c3b473db-e057-4d55-b3e6-171e8618722f\") " pod="openstack/horizon-7db6cccc7c-d7w6r" Jan 22 09:27:57 crc kubenswrapper[4892]: I0122 09:27:57.022022 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5zrrd" event={"ID":"4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4","Type":"ContainerStarted","Data":"c50c67fcf3be20647810d6fd0e4fdb843aaf8cfb66259a8ca71f9064ce261482"} Jan 22 09:27:57 crc kubenswrapper[4892]: I0122 09:27:57.040135 4892 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7gn6t" event={"ID":"8998452c-d0f3-42a2-8741-c70ffe854fda","Type":"ContainerStarted","Data":"0005ce942026eabc6d92729f78d41a6a0b8826c9e10a49dca4032d66ad43446a"} Jan 22 09:27:57 crc kubenswrapper[4892]: I0122 09:27:57.045018 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75dc8c68f7-bpxpg" event={"ID":"eca8ea69-b1df-4f64-b894-d1a33fedef9d","Type":"ContainerStarted","Data":"8df3b20295458f184b31a6dc5a61e592ab63ddb756080bf47c295d9cc731ceaa"} Jan 22 09:27:57 crc kubenswrapper[4892]: I0122 09:27:57.049567 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4","Type":"ContainerStarted","Data":"7c0318ac928ffd8156754d29d2f32055611528ba4b5714619766c7da822d3d1e"} Jan 22 09:27:57 crc kubenswrapper[4892]: I0122 09:27:57.056893 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5zvl8" event={"ID":"06ba4135-00fc-4891-bad5-e2e666eabd91","Type":"ContainerStarted","Data":"b14661a51e2b4a1757eefb97cce586dc1288f184fa8815866dfd89e58416f7c2"} Jan 22 09:27:57 crc kubenswrapper[4892]: I0122 09:27:57.065023 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b6778c569-gt8dk" event={"ID":"d02e1363-2043-4097-bda0-012158a0bf56","Type":"ContainerStarted","Data":"dfee83298b4e9f748aa712dbf1e331f085cb40c4fab4178d42abd96fb24175a1"} Jan 22 09:27:57 crc kubenswrapper[4892]: I0122 09:27:57.067245 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lnmlq" event={"ID":"7fa10241-be09-4db5-894b-845654f34a21","Type":"ContainerStarted","Data":"4d11fba568a9b4a1b840e17981766f134e3f4b353fbca67e6ec50cbbbbe146dc"} Jan 22 09:27:57 crc kubenswrapper[4892]: I0122 09:27:57.068961 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-vjc2j" event={"ID":"ef0903d9-36f7-40fd-a9ef-5688e7030688","Type":"ContainerStarted","Data":"eb2e504baa4f2ae79e9fa37201edf21b87a64b47f6d03eeb7e9d10a6185d0902"} Jan 22 09:27:57 crc kubenswrapper[4892]: I0122 09:27:57.071548 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b4bb76d5-tk4sv" event={"ID":"f9f603c0-01ef-4cdc-a784-64187d0d3b9e","Type":"ContainerStarted","Data":"4f2da7c7525cda6b67fa0ab21a74898808b6e565bf188deebc8c19fae7e0db38"} Jan 22 09:27:57 crc kubenswrapper[4892]: I0122 09:27:57.074397 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x9wpl" event={"ID":"cf83c5a9-ad08-4afb-b3e4-50690745a486","Type":"ContainerStarted","Data":"45dffb5b47d76145cf67754f86a25e9bd9f3f745f73d3f860af7fa88e19ec0cb"} Jan 22 09:27:57 crc kubenswrapper[4892]: I0122 09:27:57.114331 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-x9wpl" podStartSLOduration=3.114313819 podStartE2EDuration="3.114313819s" podCreationTimestamp="2026-01-22 09:27:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:27:57.107176657 +0000 UTC m=+1046.951255720" watchObservedRunningTime="2026-01-22 09:27:57.114313819 +0000 UTC m=+1046.958392882" Jan 22 09:27:57 crc kubenswrapper[4892]: I0122 09:27:57.120940 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c3b473db-e057-4d55-b3e6-171e8618722f-scripts\") pod 
\"horizon-7db6cccc7c-d7w6r\" (UID: \"c3b473db-e057-4d55-b3e6-171e8618722f\") " pod="openstack/horizon-7db6cccc7c-d7w6r" Jan 22 09:27:57 crc kubenswrapper[4892]: I0122 09:27:57.121144 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3b473db-e057-4d55-b3e6-171e8618722f-logs\") pod \"horizon-7db6cccc7c-d7w6r\" (UID: \"c3b473db-e057-4d55-b3e6-171e8618722f\") " pod="openstack/horizon-7db6cccc7c-d7w6r" Jan 22 09:27:57 crc kubenswrapper[4892]: I0122 09:27:57.121218 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c3b473db-e057-4d55-b3e6-171e8618722f-config-data\") pod \"horizon-7db6cccc7c-d7w6r\" (UID: \"c3b473db-e057-4d55-b3e6-171e8618722f\") " pod="openstack/horizon-7db6cccc7c-d7w6r" Jan 22 09:27:57 crc kubenswrapper[4892]: I0122 09:27:57.121303 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqhps\" (UniqueName: \"kubernetes.io/projected/c3b473db-e057-4d55-b3e6-171e8618722f-kube-api-access-kqhps\") pod \"horizon-7db6cccc7c-d7w6r\" (UID: \"c3b473db-e057-4d55-b3e6-171e8618722f\") " pod="openstack/horizon-7db6cccc7c-d7w6r" Jan 22 09:27:57 crc kubenswrapper[4892]: I0122 09:27:57.121403 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c3b473db-e057-4d55-b3e6-171e8618722f-horizon-secret-key\") pod \"horizon-7db6cccc7c-d7w6r\" (UID: \"c3b473db-e057-4d55-b3e6-171e8618722f\") " pod="openstack/horizon-7db6cccc7c-d7w6r" Jan 22 09:27:57 crc kubenswrapper[4892]: I0122 09:27:57.124855 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c3b473db-e057-4d55-b3e6-171e8618722f-scripts\") pod \"horizon-7db6cccc7c-d7w6r\" (UID: \"c3b473db-e057-4d55-b3e6-171e8618722f\") " pod="openstack/horizon-7db6cccc7c-d7w6r" Jan 22 09:27:57 crc kubenswrapper[4892]: I0122 09:27:57.126648 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3b473db-e057-4d55-b3e6-171e8618722f-logs\") pod \"horizon-7db6cccc7c-d7w6r\" (UID: \"c3b473db-e057-4d55-b3e6-171e8618722f\") " pod="openstack/horizon-7db6cccc7c-d7w6r" Jan 22 09:27:57 crc kubenswrapper[4892]: I0122 09:27:57.134310 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c3b473db-e057-4d55-b3e6-171e8618722f-horizon-secret-key\") pod \"horizon-7db6cccc7c-d7w6r\" (UID: \"c3b473db-e057-4d55-b3e6-171e8618722f\") " pod="openstack/horizon-7db6cccc7c-d7w6r" Jan 22 09:27:57 crc kubenswrapper[4892]: I0122 09:27:57.135046 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c3b473db-e057-4d55-b3e6-171e8618722f-config-data\") pod \"horizon-7db6cccc7c-d7w6r\" (UID: \"c3b473db-e057-4d55-b3e6-171e8618722f\") " pod="openstack/horizon-7db6cccc7c-d7w6r" Jan 22 09:27:57 crc kubenswrapper[4892]: I0122 09:27:57.148982 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqhps\" (UniqueName: \"kubernetes.io/projected/c3b473db-e057-4d55-b3e6-171e8618722f-kube-api-access-kqhps\") pod \"horizon-7db6cccc7c-d7w6r\" (UID: \"c3b473db-e057-4d55-b3e6-171e8618722f\") " pod="openstack/horizon-7db6cccc7c-d7w6r" Jan 22 09:27:57 crc kubenswrapper[4892]: I0122 09:27:57.151827 4892 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 09:27:57 crc kubenswrapper[4892]: W0122 09:27:57.256512 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae2e846a_d98e_403f_9a1b_1c3dfdcafc78.slice/crio-41263d2e52aeb5a1744509e167e03ecded831e61ace4d580407284c574af28cf WatchSource:0}: Error finding container 41263d2e52aeb5a1744509e167e03ecded831e61ace4d580407284c574af28cf: Status 404 returned error can't find the container with id 41263d2e52aeb5a1744509e167e03ecded831e61ace4d580407284c574af28cf Jan 22 09:27:57 crc kubenswrapper[4892]: I0122 09:27:57.338082 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7db6cccc7c-d7w6r" Jan 22 09:27:57 crc kubenswrapper[4892]: I0122 09:27:57.900376 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7db6cccc7c-d7w6r"] Jan 22 09:27:58 crc kubenswrapper[4892]: I0122 09:27:58.092549 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78","Type":"ContainerStarted","Data":"41263d2e52aeb5a1744509e167e03ecded831e61ace4d580407284c574af28cf"} Jan 22 09:27:58 crc kubenswrapper[4892]: I0122 09:27:58.105364 4892 generic.go:334] "Generic (PLEG): container finished" podID="f9f603c0-01ef-4cdc-a784-64187d0d3b9e" containerID="8861dba1a7dfb518dbbf37058330687453bc84a104b36238f0ce87450b6a19b2" exitCode=0 Jan 22 09:27:58 crc kubenswrapper[4892]: I0122 09:27:58.105493 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b4bb76d5-tk4sv" event={"ID":"f9f603c0-01ef-4cdc-a784-64187d0d3b9e","Type":"ContainerDied","Data":"8861dba1a7dfb518dbbf37058330687453bc84a104b36238f0ce87450b6a19b2"} Jan 22 09:27:58 crc kubenswrapper[4892]: I0122 09:27:58.120794 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7db6cccc7c-d7w6r" event={"ID":"c3b473db-e057-4d55-b3e6-171e8618722f","Type":"ContainerStarted","Data":"66ba7deb8bf8b0a681b55f0ee5bd4e1f020883dcc6283c970d11f55280ad0bf5"} Jan 22 09:27:58 crc kubenswrapper[4892]: I0122 09:27:58.128798 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5zvl8" event={"ID":"06ba4135-00fc-4891-bad5-e2e666eabd91","Type":"ContainerStarted","Data":"bffd5591c6aea395c0c24ad45e178f44c9380bc6476f6ab41c1de9f5edbfabe7"} Jan 22 09:27:58 crc kubenswrapper[4892]: I0122 09:27:58.137696 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x9wpl" event={"ID":"cf83c5a9-ad08-4afb-b3e4-50690745a486","Type":"ContainerStarted","Data":"7b33f0d6ad40d93289274e89f9df00e01fc4edd767d890cfd75327ef7f0ca2be"} Jan 22 09:27:58 crc kubenswrapper[4892]: I0122 09:27:58.152189 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-5zvl8" podStartSLOduration=4.152134863 podStartE2EDuration="4.152134863s" podCreationTimestamp="2026-01-22 09:27:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:27:58.14951988 +0000 UTC m=+1047.993598943" watchObservedRunningTime="2026-01-22 09:27:58.152134863 +0000 UTC m=+1047.996213916" Jan 22 09:27:58 crc kubenswrapper[4892]: I0122 09:27:58.163931 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"e5f9eb4e-41c6-45dd-8e09-69d0370a5504","Type":"ContainerStarted","Data":"c1bb077fb24f04ff6d76182e7a41e5a728e7a5d3ff35d9285ae04d236c2e6c11"} Jan 22 09:27:58 crc kubenswrapper[4892]: I0122 09:27:58.172062 4892 generic.go:334] "Generic (PLEG): container finished" podID="ef0903d9-36f7-40fd-a9ef-5688e7030688" containerID="93473b14492812102132acd7b64b2c9421677b996c4391cd59926196136b65de" exitCode=0 Jan 22 09:27:58 crc kubenswrapper[4892]: I0122 09:27:58.172106 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-vjc2j" event={"ID":"ef0903d9-36f7-40fd-a9ef-5688e7030688","Type":"ContainerDied","Data":"93473b14492812102132acd7b64b2c9421677b996c4391cd59926196136b65de"} Jan 22 09:27:58 crc kubenswrapper[4892]: I0122 09:27:58.743086 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-tk4sv" Jan 22 09:27:58 crc kubenswrapper[4892]: I0122 09:27:58.879848 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-dns-swift-storage-0\") pod \"f9f603c0-01ef-4cdc-a784-64187d0d3b9e\" (UID: \"f9f603c0-01ef-4cdc-a784-64187d0d3b9e\") " Jan 22 09:27:58 crc kubenswrapper[4892]: I0122 09:27:58.880111 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-ovsdbserver-sb\") pod \"f9f603c0-01ef-4cdc-a784-64187d0d3b9e\" (UID: \"f9f603c0-01ef-4cdc-a784-64187d0d3b9e\") " Jan 22 09:27:58 crc kubenswrapper[4892]: I0122 09:27:58.880164 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-ovsdbserver-nb\") pod \"f9f603c0-01ef-4cdc-a784-64187d0d3b9e\" (UID: \"f9f603c0-01ef-4cdc-a784-64187d0d3b9e\") " Jan 22 09:27:58 crc kubenswrapper[4892]: I0122 09:27:58.880221 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-config\") pod \"f9f603c0-01ef-4cdc-a784-64187d0d3b9e\" (UID: \"f9f603c0-01ef-4cdc-a784-64187d0d3b9e\") " Jan 22 09:27:58 crc kubenswrapper[4892]: I0122 09:27:58.880299 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rtjf\" (UniqueName: \"kubernetes.io/projected/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-kube-api-access-8rtjf\") pod \"f9f603c0-01ef-4cdc-a784-64187d0d3b9e\" (UID: \"f9f603c0-01ef-4cdc-a784-64187d0d3b9e\") " Jan 22 09:27:58 crc kubenswrapper[4892]: I0122 09:27:58.880334 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-dns-svc\") pod \"f9f603c0-01ef-4cdc-a784-64187d0d3b9e\" (UID: \"f9f603c0-01ef-4cdc-a784-64187d0d3b9e\") " Jan 22 09:27:58 crc kubenswrapper[4892]: I0122 09:27:58.885385 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-kube-api-access-8rtjf" (OuterVolumeSpecName: "kube-api-access-8rtjf") pod "f9f603c0-01ef-4cdc-a784-64187d0d3b9e" (UID: "f9f603c0-01ef-4cdc-a784-64187d0d3b9e"). InnerVolumeSpecName "kube-api-access-8rtjf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:27:58 crc kubenswrapper[4892]: I0122 09:27:58.904780 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f9f603c0-01ef-4cdc-a784-64187d0d3b9e" (UID: "f9f603c0-01ef-4cdc-a784-64187d0d3b9e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:58 crc kubenswrapper[4892]: I0122 09:27:58.921038 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-config" (OuterVolumeSpecName: "config") pod "f9f603c0-01ef-4cdc-a784-64187d0d3b9e" (UID: "f9f603c0-01ef-4cdc-a784-64187d0d3b9e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:58 crc kubenswrapper[4892]: I0122 09:27:58.922970 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f9f603c0-01ef-4cdc-a784-64187d0d3b9e" (UID: "f9f603c0-01ef-4cdc-a784-64187d0d3b9e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:58 crc kubenswrapper[4892]: I0122 09:27:58.928086 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f9f603c0-01ef-4cdc-a784-64187d0d3b9e" (UID: "f9f603c0-01ef-4cdc-a784-64187d0d3b9e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:58 crc kubenswrapper[4892]: I0122 09:27:58.937024 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f9f603c0-01ef-4cdc-a784-64187d0d3b9e" (UID: "f9f603c0-01ef-4cdc-a784-64187d0d3b9e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:27:58 crc kubenswrapper[4892]: I0122 09:27:58.981970 4892 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:58 crc kubenswrapper[4892]: I0122 09:27:58.981999 4892 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:58 crc kubenswrapper[4892]: I0122 09:27:58.982008 4892 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:58 crc kubenswrapper[4892]: I0122 09:27:58.982017 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:58 crc kubenswrapper[4892]: I0122 09:27:58.982026 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rtjf\" (UniqueName: \"kubernetes.io/projected/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-kube-api-access-8rtjf\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:58 crc kubenswrapper[4892]: I0122 09:27:58.982036 4892 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9f603c0-01ef-4cdc-a784-64187d0d3b9e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 09:27:59 crc kubenswrapper[4892]: I0122 09:27:59.182457 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e5f9eb4e-41c6-45dd-8e09-69d0370a5504","Type":"ContainerStarted","Data":"1f8239d4a424c7f45f80e365a1cae98bb144e64b1922913068ff471b1a50e8fa"} Jan 22 09:27:59 crc kubenswrapper[4892]: I0122 09:27:59.182622 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e5f9eb4e-41c6-45dd-8e09-69d0370a5504" containerName="glance-log" containerID="cri-o://c1bb077fb24f04ff6d76182e7a41e5a728e7a5d3ff35d9285ae04d236c2e6c11" gracePeriod=30 Jan 22 09:27:59 crc kubenswrapper[4892]: I0122 09:27:59.183133 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e5f9eb4e-41c6-45dd-8e09-69d0370a5504" containerName="glance-httpd" containerID="cri-o://1f8239d4a424c7f45f80e365a1cae98bb144e64b1922913068ff471b1a50e8fa" gracePeriod=30 Jan 22 09:27:59 crc kubenswrapper[4892]: I0122 09:27:59.187044 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-vjc2j" event={"ID":"ef0903d9-36f7-40fd-a9ef-5688e7030688","Type":"ContainerStarted","Data":"a42a4ba9a9178a83efcedc9dee4ec62cd509815212de8afd7cf796f95d4d348c"} Jan 22 09:27:59 crc kubenswrapper[4892]: I0122 09:27:59.187502 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5dc4fcdbc-vjc2j" Jan 22 09:27:59 crc kubenswrapper[4892]: I0122 09:27:59.196102 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78","Type":"ContainerStarted","Data":"630bc34d1388698047344bc01c466d4df0eeca17756b334d79b5c2ac6cfb9f77"} Jan 22 09:27:59 crc kubenswrapper[4892]: I0122 
09:27:59.203230 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b4bb76d5-tk4sv" event={"ID":"f9f603c0-01ef-4cdc-a784-64187d0d3b9e","Type":"ContainerDied","Data":"4f2da7c7525cda6b67fa0ab21a74898808b6e565bf188deebc8c19fae7e0db38"} Jan 22 09:27:59 crc kubenswrapper[4892]: I0122 09:27:59.203294 4892 scope.go:117] "RemoveContainer" containerID="8861dba1a7dfb518dbbf37058330687453bc84a104b36238f0ce87450b6a19b2" Jan 22 09:27:59 crc kubenswrapper[4892]: I0122 09:27:59.203312 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-tk4sv" Jan 22 09:27:59 crc kubenswrapper[4892]: I0122 09:27:59.216320 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.216299539 podStartE2EDuration="5.216299539s" podCreationTimestamp="2026-01-22 09:27:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:27:59.20715981 +0000 UTC m=+1049.051238873" watchObservedRunningTime="2026-01-22 09:27:59.216299539 +0000 UTC m=+1049.060378602" Jan 22 09:27:59 crc kubenswrapper[4892]: I0122 09:27:59.245824 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5dc4fcdbc-vjc2j" podStartSLOduration=5.245806107 podStartE2EDuration="5.245806107s" podCreationTimestamp="2026-01-22 09:27:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:27:59.241599246 +0000 UTC m=+1049.085678309" watchObservedRunningTime="2026-01-22 09:27:59.245806107 +0000 UTC m=+1049.089885170" Jan 22 09:27:59 crc kubenswrapper[4892]: I0122 09:27:59.290998 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-tk4sv"] Jan 22 09:27:59 crc kubenswrapper[4892]: I0122 09:27:59.300451 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-tk4sv"] Jan 22 09:27:59 crc kubenswrapper[4892]: I0122 09:27:59.431871 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9f603c0-01ef-4cdc-a784-64187d0d3b9e" path="/var/lib/kubelet/pods/f9f603c0-01ef-4cdc-a784-64187d0d3b9e/volumes" Jan 22 09:28:00 crc kubenswrapper[4892]: I0122 09:28:00.214086 4892 generic.go:334] "Generic (PLEG): container finished" podID="e5f9eb4e-41c6-45dd-8e09-69d0370a5504" containerID="1f8239d4a424c7f45f80e365a1cae98bb144e64b1922913068ff471b1a50e8fa" exitCode=0 Jan 22 09:28:00 crc kubenswrapper[4892]: I0122 09:28:00.214395 4892 generic.go:334] "Generic (PLEG): container finished" podID="e5f9eb4e-41c6-45dd-8e09-69d0370a5504" containerID="c1bb077fb24f04ff6d76182e7a41e5a728e7a5d3ff35d9285ae04d236c2e6c11" exitCode=143 Jan 22 09:28:00 crc kubenswrapper[4892]: I0122 09:28:00.214175 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e5f9eb4e-41c6-45dd-8e09-69d0370a5504","Type":"ContainerDied","Data":"1f8239d4a424c7f45f80e365a1cae98bb144e64b1922913068ff471b1a50e8fa"} Jan 22 09:28:00 crc kubenswrapper[4892]: I0122 09:28:00.214446 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e5f9eb4e-41c6-45dd-8e09-69d0370a5504","Type":"ContainerDied","Data":"c1bb077fb24f04ff6d76182e7a41e5a728e7a5d3ff35d9285ae04d236c2e6c11"} Jan 22 09:28:00 crc kubenswrapper[4892]: I0122 09:28:00.216878 
4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78","Type":"ContainerStarted","Data":"e7dc7c8e4e893a8048d474362d2857379310ac79fbca59a64e4d5c95263fda37"} Jan 22 09:28:00 crc kubenswrapper[4892]: I0122 09:28:00.217010 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ae2e846a-d98e-403f-9a1b-1c3dfdcafc78" containerName="glance-httpd" containerID="cri-o://e7dc7c8e4e893a8048d474362d2857379310ac79fbca59a64e4d5c95263fda37" gracePeriod=30 Jan 22 09:28:00 crc kubenswrapper[4892]: I0122 09:28:00.216981 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ae2e846a-d98e-403f-9a1b-1c3dfdcafc78" containerName="glance-log" containerID="cri-o://630bc34d1388698047344bc01c466d4df0eeca17756b334d79b5c2ac6cfb9f77" gracePeriod=30 Jan 22 09:28:00 crc kubenswrapper[4892]: I0122 09:28:00.254228 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.2542093770000005 podStartE2EDuration="6.254209377s" podCreationTimestamp="2026-01-22 09:27:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:28:00.247492245 +0000 UTC m=+1050.091571308" watchObservedRunningTime="2026-01-22 09:28:00.254209377 +0000 UTC m=+1050.098288440" Jan 22 09:28:01 crc kubenswrapper[4892]: I0122 09:28:01.236164 4892 generic.go:334] "Generic (PLEG): container finished" podID="ae2e846a-d98e-403f-9a1b-1c3dfdcafc78" containerID="e7dc7c8e4e893a8048d474362d2857379310ac79fbca59a64e4d5c95263fda37" exitCode=0 Jan 22 09:28:01 crc kubenswrapper[4892]: I0122 09:28:01.236631 4892 generic.go:334] "Generic (PLEG): container finished" podID="ae2e846a-d98e-403f-9a1b-1c3dfdcafc78" containerID="630bc34d1388698047344bc01c466d4df0eeca17756b334d79b5c2ac6cfb9f77" exitCode=143 Jan 22 09:28:01 crc kubenswrapper[4892]: I0122 09:28:01.236271 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78","Type":"ContainerDied","Data":"e7dc7c8e4e893a8048d474362d2857379310ac79fbca59a64e4d5c95263fda37"} Jan 22 09:28:01 crc kubenswrapper[4892]: I0122 09:28:01.236722 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78","Type":"ContainerDied","Data":"630bc34d1388698047344bc01c466d4df0eeca17756b334d79b5c2ac6cfb9f77"} Jan 22 09:28:02 crc kubenswrapper[4892]: I0122 09:28:02.248751 4892 generic.go:334] "Generic (PLEG): container finished" podID="cf83c5a9-ad08-4afb-b3e4-50690745a486" containerID="7b33f0d6ad40d93289274e89f9df00e01fc4edd767d890cfd75327ef7f0ca2be" exitCode=0 Jan 22 09:28:02 crc kubenswrapper[4892]: I0122 09:28:02.248829 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x9wpl" event={"ID":"cf83c5a9-ad08-4afb-b3e4-50690745a486","Type":"ContainerDied","Data":"7b33f0d6ad40d93289274e89f9df00e01fc4edd767d890cfd75327ef7f0ca2be"} Jan 22 09:28:02 crc kubenswrapper[4892]: I0122 09:28:02.253829 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"e5f9eb4e-41c6-45dd-8e09-69d0370a5504","Type":"ContainerDied","Data":"5302ed0448ed49e5af028525a3538b59b06ba420051c8b2bf0fe7924b754319f"} Jan 22 09:28:02 crc kubenswrapper[4892]: I0122 09:28:02.253887 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5302ed0448ed49e5af028525a3538b59b06ba420051c8b2bf0fe7924b754319f" Jan 22 09:28:02 crc kubenswrapper[4892]: I0122 09:28:02.290456 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 09:28:02 crc kubenswrapper[4892]: I0122 09:28:02.354522 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8kxl\" (UniqueName: \"kubernetes.io/projected/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-kube-api-access-d8kxl\") pod \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\" (UID: \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\") " Jan 22 09:28:02 crc kubenswrapper[4892]: I0122 09:28:02.354574 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-public-tls-certs\") pod \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\" (UID: \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\") " Jan 22 09:28:02 crc kubenswrapper[4892]: I0122 09:28:02.354629 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\" (UID: \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\") " Jan 22 09:28:02 crc kubenswrapper[4892]: I0122 09:28:02.354670 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-config-data\") pod \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\" (UID: \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\") " Jan 22 09:28:02 crc kubenswrapper[4892]: I0122 09:28:02.354777 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-httpd-run\") pod \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\" (UID: \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\") " Jan 22 09:28:02 crc kubenswrapper[4892]: I0122 09:28:02.354814 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-combined-ca-bundle\") pod \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\" (UID: \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\") " Jan 22 09:28:02 crc kubenswrapper[4892]: I0122 09:28:02.355666 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-logs\") pod \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\" (UID: \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\") " Jan 22 09:28:02 crc kubenswrapper[4892]: I0122 09:28:02.355755 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-scripts\") pod \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\" (UID: \"e5f9eb4e-41c6-45dd-8e09-69d0370a5504\") " Jan 22 09:28:02 crc kubenswrapper[4892]: I0122 09:28:02.355865 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-httpd-run" (OuterVolumeSpecName: 
"httpd-run") pod "e5f9eb4e-41c6-45dd-8e09-69d0370a5504" (UID: "e5f9eb4e-41c6-45dd-8e09-69d0370a5504"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:28:02 crc kubenswrapper[4892]: I0122 09:28:02.356109 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-logs" (OuterVolumeSpecName: "logs") pod "e5f9eb4e-41c6-45dd-8e09-69d0370a5504" (UID: "e5f9eb4e-41c6-45dd-8e09-69d0370a5504"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:28:02 crc kubenswrapper[4892]: I0122 09:28:02.356497 4892 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:02 crc kubenswrapper[4892]: I0122 09:28:02.356514 4892 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-logs\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:02 crc kubenswrapper[4892]: I0122 09:28:02.362267 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-scripts" (OuterVolumeSpecName: "scripts") pod "e5f9eb4e-41c6-45dd-8e09-69d0370a5504" (UID: "e5f9eb4e-41c6-45dd-8e09-69d0370a5504"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:02 crc kubenswrapper[4892]: I0122 09:28:02.363203 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "e5f9eb4e-41c6-45dd-8e09-69d0370a5504" (UID: "e5f9eb4e-41c6-45dd-8e09-69d0370a5504"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 09:28:02 crc kubenswrapper[4892]: I0122 09:28:02.391047 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-kube-api-access-d8kxl" (OuterVolumeSpecName: "kube-api-access-d8kxl") pod "e5f9eb4e-41c6-45dd-8e09-69d0370a5504" (UID: "e5f9eb4e-41c6-45dd-8e09-69d0370a5504"). InnerVolumeSpecName "kube-api-access-d8kxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:28:02 crc kubenswrapper[4892]: I0122 09:28:02.400272 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5f9eb4e-41c6-45dd-8e09-69d0370a5504" (UID: "e5f9eb4e-41c6-45dd-8e09-69d0370a5504"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:02 crc kubenswrapper[4892]: I0122 09:28:02.417467 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e5f9eb4e-41c6-45dd-8e09-69d0370a5504" (UID: "e5f9eb4e-41c6-45dd-8e09-69d0370a5504"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:02 crc kubenswrapper[4892]: I0122 09:28:02.425278 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-config-data" (OuterVolumeSpecName: "config-data") pod "e5f9eb4e-41c6-45dd-8e09-69d0370a5504" (UID: "e5f9eb4e-41c6-45dd-8e09-69d0370a5504"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:02 crc kubenswrapper[4892]: I0122 09:28:02.459109 4892 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:02 crc kubenswrapper[4892]: I0122 09:28:02.459140 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8kxl\" (UniqueName: \"kubernetes.io/projected/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-kube-api-access-d8kxl\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:02 crc kubenswrapper[4892]: I0122 09:28:02.459153 4892 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:02 crc kubenswrapper[4892]: I0122 09:28:02.459175 4892 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 22 09:28:02 crc kubenswrapper[4892]: I0122 09:28:02.459184 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:02 crc kubenswrapper[4892]: I0122 09:28:02.459193 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5f9eb4e-41c6-45dd-8e09-69d0370a5504-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:02 crc kubenswrapper[4892]: I0122 09:28:02.477017 4892 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 22 09:28:02 crc kubenswrapper[4892]: I0122 09:28:02.560615 4892 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.262625 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.305346 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.320811 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.332841 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 09:28:03 crc kubenswrapper[4892]: E0122 09:28:03.333396 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9f603c0-01ef-4cdc-a784-64187d0d3b9e" containerName="init" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.333412 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9f603c0-01ef-4cdc-a784-64187d0d3b9e" containerName="init" Jan 22 09:28:03 crc kubenswrapper[4892]: E0122 09:28:03.333445 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5f9eb4e-41c6-45dd-8e09-69d0370a5504" containerName="glance-log" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.333453 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5f9eb4e-41c6-45dd-8e09-69d0370a5504" containerName="glance-log" Jan 22 09:28:03 crc kubenswrapper[4892]: E0122 09:28:03.333477 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5f9eb4e-41c6-45dd-8e09-69d0370a5504" containerName="glance-httpd" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.333484 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5f9eb4e-41c6-45dd-8e09-69d0370a5504" containerName="glance-httpd" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.333678 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5f9eb4e-41c6-45dd-8e09-69d0370a5504" containerName="glance-httpd" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.333699 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5f9eb4e-41c6-45dd-8e09-69d0370a5504" containerName="glance-log" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.333716 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9f603c0-01ef-4cdc-a784-64187d0d3b9e" containerName="init" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.334667 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.339657 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.340413 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.342852 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.382562 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.382643 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-config-data\") pod \"glance-default-external-api-0\" (UID: \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.382666 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-scripts\") pod \"glance-default-external-api-0\" (UID: \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.382685 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.382724 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.382755 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-logs\") pod \"glance-default-external-api-0\" (UID: \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.382782 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fqlt\" (UniqueName: \"kubernetes.io/projected/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-kube-api-access-9fqlt\") pod \"glance-default-external-api-0\" (UID: \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.382802 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.429953 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5f9eb4e-41c6-45dd-8e09-69d0370a5504" path="/var/lib/kubelet/pods/e5f9eb4e-41c6-45dd-8e09-69d0370a5504/volumes" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.489656 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.489730 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-logs\") pod \"glance-default-external-api-0\" (UID: \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.489786 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fqlt\" (UniqueName: \"kubernetes.io/projected/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-kube-api-access-9fqlt\") pod \"glance-default-external-api-0\" (UID: \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.489809 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.489877 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.489960 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-config-data\") pod \"glance-default-external-api-0\" (UID: \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.489980 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-scripts\") pod \"glance-default-external-api-0\" (UID: \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.489997 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\") " 
pod="openstack/glance-default-external-api-0" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.492499 4892 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.494786 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-logs\") pod \"glance-default-external-api-0\" (UID: \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.499586 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-config-data\") pod \"glance-default-external-api-0\" (UID: \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.499984 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.500218 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.501159 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-scripts\") pod \"glance-default-external-api-0\" (UID: \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.501652 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.521251 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fqlt\" (UniqueName: \"kubernetes.io/projected/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-kube-api-access-9fqlt\") pod \"glance-default-external-api-0\" (UID: \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.565223 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.659484 4892 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.803218 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75dc8c68f7-bpxpg"] Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.830074 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-79777d5484-zk25q"] Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.836241 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-79777d5484-zk25q" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.839905 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.844930 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-79777d5484-zk25q"] Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.912500 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c837fca4-ae2c-43fd-850c-f2aca8331d27-logs\") pod \"horizon-79777d5484-zk25q\" (UID: \"c837fca4-ae2c-43fd-850c-f2aca8331d27\") " pod="openstack/horizon-79777d5484-zk25q" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.912564 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c837fca4-ae2c-43fd-850c-f2aca8331d27-combined-ca-bundle\") pod \"horizon-79777d5484-zk25q\" (UID: \"c837fca4-ae2c-43fd-850c-f2aca8331d27\") " pod="openstack/horizon-79777d5484-zk25q" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.912607 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c837fca4-ae2c-43fd-850c-f2aca8331d27-scripts\") pod \"horizon-79777d5484-zk25q\" (UID: \"c837fca4-ae2c-43fd-850c-f2aca8331d27\") " pod="openstack/horizon-79777d5484-zk25q" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.912660 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c837fca4-ae2c-43fd-850c-f2aca8331d27-config-data\") pod \"horizon-79777d5484-zk25q\" (UID: \"c837fca4-ae2c-43fd-850c-f2aca8331d27\") " pod="openstack/horizon-79777d5484-zk25q" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.912843 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c837fca4-ae2c-43fd-850c-f2aca8331d27-horizon-secret-key\") pod \"horizon-79777d5484-zk25q\" (UID: \"c837fca4-ae2c-43fd-850c-f2aca8331d27\") " pod="openstack/horizon-79777d5484-zk25q" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.912919 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq6km\" (UniqueName: \"kubernetes.io/projected/c837fca4-ae2c-43fd-850c-f2aca8331d27-kube-api-access-bq6km\") pod \"horizon-79777d5484-zk25q\" (UID: \"c837fca4-ae2c-43fd-850c-f2aca8331d27\") " pod="openstack/horizon-79777d5484-zk25q" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.912941 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c837fca4-ae2c-43fd-850c-f2aca8331d27-horizon-tls-certs\") pod \"horizon-79777d5484-zk25q\" (UID: \"c837fca4-ae2c-43fd-850c-f2aca8331d27\") " pod="openstack/horizon-79777d5484-zk25q" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.914640 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7db6cccc7c-d7w6r"] Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.960665 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7bd8749ddb-x9h4l"] Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.971410 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bd8749ddb-x9h4l" Jan 22 09:28:03 crc kubenswrapper[4892]: I0122 09:28:03.985384 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7bd8749ddb-x9h4l"] Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.005548 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.016428 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c837fca4-ae2c-43fd-850c-f2aca8331d27-horizon-secret-key\") pod \"horizon-79777d5484-zk25q\" (UID: \"c837fca4-ae2c-43fd-850c-f2aca8331d27\") " pod="openstack/horizon-79777d5484-zk25q" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.016725 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntwft\" (UniqueName: \"kubernetes.io/projected/a434b179-017a-4112-a673-1859114a62ed-kube-api-access-ntwft\") pod \"horizon-7bd8749ddb-x9h4l\" (UID: \"a434b179-017a-4112-a673-1859114a62ed\") " pod="openstack/horizon-7bd8749ddb-x9h4l" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.016863 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq6km\" (UniqueName: \"kubernetes.io/projected/c837fca4-ae2c-43fd-850c-f2aca8331d27-kube-api-access-bq6km\") pod \"horizon-79777d5484-zk25q\" (UID: \"c837fca4-ae2c-43fd-850c-f2aca8331d27\") " pod="openstack/horizon-79777d5484-zk25q" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.016967 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a434b179-017a-4112-a673-1859114a62ed-config-data\") pod \"horizon-7bd8749ddb-x9h4l\" (UID: \"a434b179-017a-4112-a673-1859114a62ed\") " pod="openstack/horizon-7bd8749ddb-x9h4l" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.017104 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c837fca4-ae2c-43fd-850c-f2aca8331d27-horizon-tls-certs\") pod \"horizon-79777d5484-zk25q\" (UID: \"c837fca4-ae2c-43fd-850c-f2aca8331d27\") " pod="openstack/horizon-79777d5484-zk25q" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.017253 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a434b179-017a-4112-a673-1859114a62ed-combined-ca-bundle\") pod \"horizon-7bd8749ddb-x9h4l\" (UID: \"a434b179-017a-4112-a673-1859114a62ed\") " pod="openstack/horizon-7bd8749ddb-x9h4l" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.017445 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a434b179-017a-4112-a673-1859114a62ed-scripts\") pod \"horizon-7bd8749ddb-x9h4l\" (UID: \"a434b179-017a-4112-a673-1859114a62ed\") " pod="openstack/horizon-7bd8749ddb-x9h4l" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.017604 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c837fca4-ae2c-43fd-850c-f2aca8331d27-logs\") pod \"horizon-79777d5484-zk25q\" (UID: \"c837fca4-ae2c-43fd-850c-f2aca8331d27\") " pod="openstack/horizon-79777d5484-zk25q" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.017683 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a434b179-017a-4112-a673-1859114a62ed-horizon-secret-key\") pod \"horizon-7bd8749ddb-x9h4l\" (UID: \"a434b179-017a-4112-a673-1859114a62ed\") " pod="openstack/horizon-7bd8749ddb-x9h4l" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.017777 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c837fca4-ae2c-43fd-850c-f2aca8331d27-combined-ca-bundle\") pod \"horizon-79777d5484-zk25q\" (UID: \"c837fca4-ae2c-43fd-850c-f2aca8331d27\") " pod="openstack/horizon-79777d5484-zk25q" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.017890 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c837fca4-ae2c-43fd-850c-f2aca8331d27-scripts\") pod \"horizon-79777d5484-zk25q\" (UID: \"c837fca4-ae2c-43fd-850c-f2aca8331d27\") " pod="openstack/horizon-79777d5484-zk25q" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.018006 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c837fca4-ae2c-43fd-850c-f2aca8331d27-config-data\") pod \"horizon-79777d5484-zk25q\" (UID: \"c837fca4-ae2c-43fd-850c-f2aca8331d27\") " pod="openstack/horizon-79777d5484-zk25q" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.018104 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a434b179-017a-4112-a673-1859114a62ed-horizon-tls-certs\") pod \"horizon-7bd8749ddb-x9h4l\" (UID: \"a434b179-017a-4112-a673-1859114a62ed\") " pod="openstack/horizon-7bd8749ddb-x9h4l" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.018168 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a434b179-017a-4112-a673-1859114a62ed-logs\") pod \"horizon-7bd8749ddb-x9h4l\" (UID: \"a434b179-017a-4112-a673-1859114a62ed\") " pod="openstack/horizon-7bd8749ddb-x9h4l" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.018168 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c837fca4-ae2c-43fd-850c-f2aca8331d27-logs\") pod \"horizon-79777d5484-zk25q\" (UID: \"c837fca4-ae2c-43fd-850c-f2aca8331d27\") " pod="openstack/horizon-79777d5484-zk25q" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.019138 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c837fca4-ae2c-43fd-850c-f2aca8331d27-scripts\") pod \"horizon-79777d5484-zk25q\" (UID: 
\"c837fca4-ae2c-43fd-850c-f2aca8331d27\") " pod="openstack/horizon-79777d5484-zk25q" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.019805 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c837fca4-ae2c-43fd-850c-f2aca8331d27-horizon-secret-key\") pod \"horizon-79777d5484-zk25q\" (UID: \"c837fca4-ae2c-43fd-850c-f2aca8331d27\") " pod="openstack/horizon-79777d5484-zk25q" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.021650 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c837fca4-ae2c-43fd-850c-f2aca8331d27-config-data\") pod \"horizon-79777d5484-zk25q\" (UID: \"c837fca4-ae2c-43fd-850c-f2aca8331d27\") " pod="openstack/horizon-79777d5484-zk25q" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.023037 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c837fca4-ae2c-43fd-850c-f2aca8331d27-combined-ca-bundle\") pod \"horizon-79777d5484-zk25q\" (UID: \"c837fca4-ae2c-43fd-850c-f2aca8331d27\") " pod="openstack/horizon-79777d5484-zk25q" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.033813 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c837fca4-ae2c-43fd-850c-f2aca8331d27-horizon-tls-certs\") pod \"horizon-79777d5484-zk25q\" (UID: \"c837fca4-ae2c-43fd-850c-f2aca8331d27\") " pod="openstack/horizon-79777d5484-zk25q" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.036175 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq6km\" (UniqueName: \"kubernetes.io/projected/c837fca4-ae2c-43fd-850c-f2aca8331d27-kube-api-access-bq6km\") pod \"horizon-79777d5484-zk25q\" (UID: \"c837fca4-ae2c-43fd-850c-f2aca8331d27\") " pod="openstack/horizon-79777d5484-zk25q" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.119583 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntwft\" (UniqueName: \"kubernetes.io/projected/a434b179-017a-4112-a673-1859114a62ed-kube-api-access-ntwft\") pod \"horizon-7bd8749ddb-x9h4l\" (UID: \"a434b179-017a-4112-a673-1859114a62ed\") " pod="openstack/horizon-7bd8749ddb-x9h4l" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.119647 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a434b179-017a-4112-a673-1859114a62ed-config-data\") pod \"horizon-7bd8749ddb-x9h4l\" (UID: \"a434b179-017a-4112-a673-1859114a62ed\") " pod="openstack/horizon-7bd8749ddb-x9h4l" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.119675 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a434b179-017a-4112-a673-1859114a62ed-combined-ca-bundle\") pod \"horizon-7bd8749ddb-x9h4l\" (UID: \"a434b179-017a-4112-a673-1859114a62ed\") " pod="openstack/horizon-7bd8749ddb-x9h4l" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.119699 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a434b179-017a-4112-a673-1859114a62ed-scripts\") pod \"horizon-7bd8749ddb-x9h4l\" (UID: \"a434b179-017a-4112-a673-1859114a62ed\") " pod="openstack/horizon-7bd8749ddb-x9h4l" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 
09:28:04.119727 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a434b179-017a-4112-a673-1859114a62ed-horizon-secret-key\") pod \"horizon-7bd8749ddb-x9h4l\" (UID: \"a434b179-017a-4112-a673-1859114a62ed\") " pod="openstack/horizon-7bd8749ddb-x9h4l" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.119793 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a434b179-017a-4112-a673-1859114a62ed-horizon-tls-certs\") pod \"horizon-7bd8749ddb-x9h4l\" (UID: \"a434b179-017a-4112-a673-1859114a62ed\") " pod="openstack/horizon-7bd8749ddb-x9h4l" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.119818 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a434b179-017a-4112-a673-1859114a62ed-logs\") pod \"horizon-7bd8749ddb-x9h4l\" (UID: \"a434b179-017a-4112-a673-1859114a62ed\") " pod="openstack/horizon-7bd8749ddb-x9h4l" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.120240 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a434b179-017a-4112-a673-1859114a62ed-logs\") pod \"horizon-7bd8749ddb-x9h4l\" (UID: \"a434b179-017a-4112-a673-1859114a62ed\") " pod="openstack/horizon-7bd8749ddb-x9h4l" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.120795 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a434b179-017a-4112-a673-1859114a62ed-scripts\") pod \"horizon-7bd8749ddb-x9h4l\" (UID: \"a434b179-017a-4112-a673-1859114a62ed\") " pod="openstack/horizon-7bd8749ddb-x9h4l" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.122583 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a434b179-017a-4112-a673-1859114a62ed-config-data\") pod \"horizon-7bd8749ddb-x9h4l\" (UID: \"a434b179-017a-4112-a673-1859114a62ed\") " pod="openstack/horizon-7bd8749ddb-x9h4l" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.124196 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a434b179-017a-4112-a673-1859114a62ed-combined-ca-bundle\") pod \"horizon-7bd8749ddb-x9h4l\" (UID: \"a434b179-017a-4112-a673-1859114a62ed\") " pod="openstack/horizon-7bd8749ddb-x9h4l" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.124586 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a434b179-017a-4112-a673-1859114a62ed-horizon-tls-certs\") pod \"horizon-7bd8749ddb-x9h4l\" (UID: \"a434b179-017a-4112-a673-1859114a62ed\") " pod="openstack/horizon-7bd8749ddb-x9h4l" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.132934 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a434b179-017a-4112-a673-1859114a62ed-horizon-secret-key\") pod \"horizon-7bd8749ddb-x9h4l\" (UID: \"a434b179-017a-4112-a673-1859114a62ed\") " pod="openstack/horizon-7bd8749ddb-x9h4l" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.135817 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntwft\" (UniqueName: 
\"kubernetes.io/projected/a434b179-017a-4112-a673-1859114a62ed-kube-api-access-ntwft\") pod \"horizon-7bd8749ddb-x9h4l\" (UID: \"a434b179-017a-4112-a673-1859114a62ed\") " pod="openstack/horizon-7bd8749ddb-x9h4l" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.184684 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-79777d5484-zk25q" Jan 22 09:28:04 crc kubenswrapper[4892]: I0122 09:28:04.304777 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bd8749ddb-x9h4l" Jan 22 09:28:05 crc kubenswrapper[4892]: I0122 09:28:05.268411 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5dc4fcdbc-vjc2j" Jan 22 09:28:05 crc kubenswrapper[4892]: I0122 09:28:05.322055 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-mgwb6"] Jan 22 09:28:05 crc kubenswrapper[4892]: I0122 09:28:05.322317 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56c9bc6f5c-mgwb6" podUID="7ca36c0e-2948-479f-88f8-f3ccf747bafd" containerName="dnsmasq-dns" containerID="cri-o://b167de06fa93b0c04f3a1410f762b566ffbef6a7e9bf3753205e8d4ef27e9fa2" gracePeriod=10 Jan 22 09:28:06 crc kubenswrapper[4892]: I0122 09:28:06.293433 4892 generic.go:334] "Generic (PLEG): container finished" podID="7ca36c0e-2948-479f-88f8-f3ccf747bafd" containerID="b167de06fa93b0c04f3a1410f762b566ffbef6a7e9bf3753205e8d4ef27e9fa2" exitCode=0 Jan 22 09:28:06 crc kubenswrapper[4892]: I0122 09:28:06.293514 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-mgwb6" event={"ID":"7ca36c0e-2948-479f-88f8-f3ccf747bafd","Type":"ContainerDied","Data":"b167de06fa93b0c04f3a1410f762b566ffbef6a7e9bf3753205e8d4ef27e9fa2"} Jan 22 09:28:08 crc kubenswrapper[4892]: I0122 09:28:08.115749 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-x9wpl" Jan 22 09:28:08 crc kubenswrapper[4892]: I0122 09:28:08.245361 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tmz2\" (UniqueName: \"kubernetes.io/projected/cf83c5a9-ad08-4afb-b3e4-50690745a486-kube-api-access-9tmz2\") pod \"cf83c5a9-ad08-4afb-b3e4-50690745a486\" (UID: \"cf83c5a9-ad08-4afb-b3e4-50690745a486\") " Jan 22 09:28:08 crc kubenswrapper[4892]: I0122 09:28:08.245491 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf83c5a9-ad08-4afb-b3e4-50690745a486-config-data\") pod \"cf83c5a9-ad08-4afb-b3e4-50690745a486\" (UID: \"cf83c5a9-ad08-4afb-b3e4-50690745a486\") " Jan 22 09:28:08 crc kubenswrapper[4892]: I0122 09:28:08.245537 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cf83c5a9-ad08-4afb-b3e4-50690745a486-credential-keys\") pod \"cf83c5a9-ad08-4afb-b3e4-50690745a486\" (UID: \"cf83c5a9-ad08-4afb-b3e4-50690745a486\") " Jan 22 09:28:08 crc kubenswrapper[4892]: I0122 09:28:08.245695 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf83c5a9-ad08-4afb-b3e4-50690745a486-combined-ca-bundle\") pod \"cf83c5a9-ad08-4afb-b3e4-50690745a486\" (UID: \"cf83c5a9-ad08-4afb-b3e4-50690745a486\") " Jan 22 09:28:08 crc kubenswrapper[4892]: I0122 09:28:08.245741 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf83c5a9-ad08-4afb-b3e4-50690745a486-scripts\") pod \"cf83c5a9-ad08-4afb-b3e4-50690745a486\" (UID: \"cf83c5a9-ad08-4afb-b3e4-50690745a486\") " Jan 22 09:28:08 crc kubenswrapper[4892]: I0122 09:28:08.245787 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf83c5a9-ad08-4afb-b3e4-50690745a486-fernet-keys\") pod \"cf83c5a9-ad08-4afb-b3e4-50690745a486\" (UID: \"cf83c5a9-ad08-4afb-b3e4-50690745a486\") " Jan 22 09:28:08 crc kubenswrapper[4892]: I0122 09:28:08.253938 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf83c5a9-ad08-4afb-b3e4-50690745a486-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "cf83c5a9-ad08-4afb-b3e4-50690745a486" (UID: "cf83c5a9-ad08-4afb-b3e4-50690745a486"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:08 crc kubenswrapper[4892]: I0122 09:28:08.254055 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf83c5a9-ad08-4afb-b3e4-50690745a486-scripts" (OuterVolumeSpecName: "scripts") pod "cf83c5a9-ad08-4afb-b3e4-50690745a486" (UID: "cf83c5a9-ad08-4afb-b3e4-50690745a486"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:08 crc kubenswrapper[4892]: I0122 09:28:08.257397 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf83c5a9-ad08-4afb-b3e4-50690745a486-kube-api-access-9tmz2" (OuterVolumeSpecName: "kube-api-access-9tmz2") pod "cf83c5a9-ad08-4afb-b3e4-50690745a486" (UID: "cf83c5a9-ad08-4afb-b3e4-50690745a486"). InnerVolumeSpecName "kube-api-access-9tmz2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:28:08 crc kubenswrapper[4892]: I0122 09:28:08.278296 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf83c5a9-ad08-4afb-b3e4-50690745a486-config-data" (OuterVolumeSpecName: "config-data") pod "cf83c5a9-ad08-4afb-b3e4-50690745a486" (UID: "cf83c5a9-ad08-4afb-b3e4-50690745a486"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:08 crc kubenswrapper[4892]: I0122 09:28:08.286454 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf83c5a9-ad08-4afb-b3e4-50690745a486-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "cf83c5a9-ad08-4afb-b3e4-50690745a486" (UID: "cf83c5a9-ad08-4afb-b3e4-50690745a486"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:08 crc kubenswrapper[4892]: I0122 09:28:08.287519 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf83c5a9-ad08-4afb-b3e4-50690745a486-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf83c5a9-ad08-4afb-b3e4-50690745a486" (UID: "cf83c5a9-ad08-4afb-b3e4-50690745a486"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:08 crc kubenswrapper[4892]: I0122 09:28:08.313517 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x9wpl" event={"ID":"cf83c5a9-ad08-4afb-b3e4-50690745a486","Type":"ContainerDied","Data":"45dffb5b47d76145cf67754f86a25e9bd9f3f745f73d3f860af7fa88e19ec0cb"} Jan 22 09:28:08 crc kubenswrapper[4892]: I0122 09:28:08.313556 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-x9wpl" Jan 22 09:28:08 crc kubenswrapper[4892]: I0122 09:28:08.313566 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45dffb5b47d76145cf67754f86a25e9bd9f3f745f73d3f860af7fa88e19ec0cb" Jan 22 09:28:08 crc kubenswrapper[4892]: I0122 09:28:08.349965 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf83c5a9-ad08-4afb-b3e4-50690745a486-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:08 crc kubenswrapper[4892]: I0122 09:28:08.350009 4892 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf83c5a9-ad08-4afb-b3e4-50690745a486-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:08 crc kubenswrapper[4892]: I0122 09:28:08.350020 4892 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf83c5a9-ad08-4afb-b3e4-50690745a486-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:08 crc kubenswrapper[4892]: I0122 09:28:08.350032 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tmz2\" (UniqueName: \"kubernetes.io/projected/cf83c5a9-ad08-4afb-b3e4-50690745a486-kube-api-access-9tmz2\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:08 crc kubenswrapper[4892]: I0122 09:28:08.350044 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf83c5a9-ad08-4afb-b3e4-50690745a486-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:08 crc kubenswrapper[4892]: I0122 09:28:08.350054 4892 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cf83c5a9-ad08-4afb-b3e4-50690745a486-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:09 crc kubenswrapper[4892]: I0122 09:28:09.198009 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-x9wpl"] Jan 22 09:28:09 crc kubenswrapper[4892]: I0122 09:28:09.206093 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-x9wpl"] Jan 22 09:28:09 crc kubenswrapper[4892]: I0122 09:28:09.294177 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-lh52x"] Jan 22 09:28:09 crc kubenswrapper[4892]: E0122 09:28:09.294723 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf83c5a9-ad08-4afb-b3e4-50690745a486" containerName="keystone-bootstrap" Jan 22 09:28:09 crc kubenswrapper[4892]: I0122 09:28:09.294784 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf83c5a9-ad08-4afb-b3e4-50690745a486" containerName="keystone-bootstrap" Jan 22 09:28:09 crc kubenswrapper[4892]: I0122 09:28:09.294985 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf83c5a9-ad08-4afb-b3e4-50690745a486" containerName="keystone-bootstrap" Jan 22 09:28:09 crc kubenswrapper[4892]: I0122 09:28:09.295933 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lh52x" Jan 22 09:28:09 crc kubenswrapper[4892]: I0122 09:28:09.299184 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 22 09:28:09 crc kubenswrapper[4892]: I0122 09:28:09.299420 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 22 09:28:09 crc kubenswrapper[4892]: I0122 09:28:09.299305 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sdhhd" Jan 22 09:28:09 crc kubenswrapper[4892]: I0122 09:28:09.299713 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 22 09:28:09 crc kubenswrapper[4892]: I0122 09:28:09.299896 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 22 09:28:09 crc kubenswrapper[4892]: I0122 09:28:09.315748 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lh52x"] Jan 22 09:28:09 crc kubenswrapper[4892]: I0122 09:28:09.374252 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n69g8\" (UniqueName: \"kubernetes.io/projected/892a47e9-2f83-4902-a210-3b23d56ad662-kube-api-access-n69g8\") pod \"keystone-bootstrap-lh52x\" (UID: \"892a47e9-2f83-4902-a210-3b23d56ad662\") " pod="openstack/keystone-bootstrap-lh52x" Jan 22 09:28:09 crc kubenswrapper[4892]: I0122 09:28:09.374602 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/892a47e9-2f83-4902-a210-3b23d56ad662-credential-keys\") pod \"keystone-bootstrap-lh52x\" (UID: \"892a47e9-2f83-4902-a210-3b23d56ad662\") " pod="openstack/keystone-bootstrap-lh52x" Jan 22 09:28:09 crc kubenswrapper[4892]: I0122 09:28:09.374719 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/892a47e9-2f83-4902-a210-3b23d56ad662-combined-ca-bundle\") pod \"keystone-bootstrap-lh52x\" (UID: \"892a47e9-2f83-4902-a210-3b23d56ad662\") " pod="openstack/keystone-bootstrap-lh52x" Jan 22 09:28:09 crc kubenswrapper[4892]: I0122 09:28:09.374912 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/892a47e9-2f83-4902-a210-3b23d56ad662-config-data\") pod \"keystone-bootstrap-lh52x\" (UID: \"892a47e9-2f83-4902-a210-3b23d56ad662\") " pod="openstack/keystone-bootstrap-lh52x" Jan 22 09:28:09 crc kubenswrapper[4892]: I0122 09:28:09.375047 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/892a47e9-2f83-4902-a210-3b23d56ad662-fernet-keys\") pod \"keystone-bootstrap-lh52x\" (UID: \"892a47e9-2f83-4902-a210-3b23d56ad662\") " pod="openstack/keystone-bootstrap-lh52x" Jan 22 09:28:09 crc kubenswrapper[4892]: I0122 09:28:09.375173 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/892a47e9-2f83-4902-a210-3b23d56ad662-scripts\") pod \"keystone-bootstrap-lh52x\" (UID: \"892a47e9-2f83-4902-a210-3b23d56ad662\") " pod="openstack/keystone-bootstrap-lh52x" Jan 22 09:28:09 crc kubenswrapper[4892]: I0122 09:28:09.431810 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="cf83c5a9-ad08-4afb-b3e4-50690745a486" path="/var/lib/kubelet/pods/cf83c5a9-ad08-4afb-b3e4-50690745a486/volumes" Jan 22 09:28:09 crc kubenswrapper[4892]: I0122 09:28:09.476525 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n69g8\" (UniqueName: \"kubernetes.io/projected/892a47e9-2f83-4902-a210-3b23d56ad662-kube-api-access-n69g8\") pod \"keystone-bootstrap-lh52x\" (UID: \"892a47e9-2f83-4902-a210-3b23d56ad662\") " pod="openstack/keystone-bootstrap-lh52x" Jan 22 09:28:09 crc kubenswrapper[4892]: I0122 09:28:09.476586 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/892a47e9-2f83-4902-a210-3b23d56ad662-credential-keys\") pod \"keystone-bootstrap-lh52x\" (UID: \"892a47e9-2f83-4902-a210-3b23d56ad662\") " pod="openstack/keystone-bootstrap-lh52x" Jan 22 09:28:09 crc kubenswrapper[4892]: I0122 09:28:09.476630 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/892a47e9-2f83-4902-a210-3b23d56ad662-combined-ca-bundle\") pod \"keystone-bootstrap-lh52x\" (UID: \"892a47e9-2f83-4902-a210-3b23d56ad662\") " pod="openstack/keystone-bootstrap-lh52x" Jan 22 09:28:09 crc kubenswrapper[4892]: I0122 09:28:09.476742 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/892a47e9-2f83-4902-a210-3b23d56ad662-config-data\") pod \"keystone-bootstrap-lh52x\" (UID: \"892a47e9-2f83-4902-a210-3b23d56ad662\") " pod="openstack/keystone-bootstrap-lh52x" Jan 22 09:28:09 crc kubenswrapper[4892]: I0122 09:28:09.476793 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/892a47e9-2f83-4902-a210-3b23d56ad662-fernet-keys\") pod \"keystone-bootstrap-lh52x\" (UID: \"892a47e9-2f83-4902-a210-3b23d56ad662\") " pod="openstack/keystone-bootstrap-lh52x" Jan 22 09:28:09 crc kubenswrapper[4892]: I0122 09:28:09.476835 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/892a47e9-2f83-4902-a210-3b23d56ad662-scripts\") pod \"keystone-bootstrap-lh52x\" (UID: \"892a47e9-2f83-4902-a210-3b23d56ad662\") " pod="openstack/keystone-bootstrap-lh52x" Jan 22 09:28:09 crc kubenswrapper[4892]: I0122 09:28:09.481422 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/892a47e9-2f83-4902-a210-3b23d56ad662-combined-ca-bundle\") pod \"keystone-bootstrap-lh52x\" (UID: \"892a47e9-2f83-4902-a210-3b23d56ad662\") " pod="openstack/keystone-bootstrap-lh52x" Jan 22 09:28:09 crc kubenswrapper[4892]: I0122 09:28:09.481769 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/892a47e9-2f83-4902-a210-3b23d56ad662-fernet-keys\") pod \"keystone-bootstrap-lh52x\" (UID: \"892a47e9-2f83-4902-a210-3b23d56ad662\") " pod="openstack/keystone-bootstrap-lh52x" Jan 22 09:28:09 crc kubenswrapper[4892]: I0122 09:28:09.481897 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/892a47e9-2f83-4902-a210-3b23d56ad662-credential-keys\") pod \"keystone-bootstrap-lh52x\" (UID: \"892a47e9-2f83-4902-a210-3b23d56ad662\") " pod="openstack/keystone-bootstrap-lh52x" Jan 22 09:28:09 crc kubenswrapper[4892]: I0122 
09:28:09.485405 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/892a47e9-2f83-4902-a210-3b23d56ad662-scripts\") pod \"keystone-bootstrap-lh52x\" (UID: \"892a47e9-2f83-4902-a210-3b23d56ad662\") " pod="openstack/keystone-bootstrap-lh52x" Jan 22 09:28:09 crc kubenswrapper[4892]: I0122 09:28:09.487431 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/892a47e9-2f83-4902-a210-3b23d56ad662-config-data\") pod \"keystone-bootstrap-lh52x\" (UID: \"892a47e9-2f83-4902-a210-3b23d56ad662\") " pod="openstack/keystone-bootstrap-lh52x" Jan 22 09:28:09 crc kubenswrapper[4892]: I0122 09:28:09.496689 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n69g8\" (UniqueName: \"kubernetes.io/projected/892a47e9-2f83-4902-a210-3b23d56ad662-kube-api-access-n69g8\") pod \"keystone-bootstrap-lh52x\" (UID: \"892a47e9-2f83-4902-a210-3b23d56ad662\") " pod="openstack/keystone-bootstrap-lh52x" Jan 22 09:28:09 crc kubenswrapper[4892]: I0122 09:28:09.618926 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lh52x" Jan 22 09:28:14 crc kubenswrapper[4892]: I0122 09:28:14.366108 4892 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56c9bc6f5c-mgwb6" podUID="7ca36c0e-2948-479f-88f8-f3ccf747bafd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: i/o timeout" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.365326 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-mgwb6" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.373003 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.392439 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-mgwb6" event={"ID":"7ca36c0e-2948-479f-88f8-f3ccf747bafd","Type":"ContainerDied","Data":"7880984785d525f9701b531312c844e14a445c47ad4dda1800d07feba3bcabd2"} Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.392488 4892 scope.go:117] "RemoveContainer" containerID="b167de06fa93b0c04f3a1410f762b566ffbef6a7e9bf3753205e8d4ef27e9fa2" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.392654 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-mgwb6" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.400936 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78","Type":"ContainerDied","Data":"41263d2e52aeb5a1744509e167e03ecded831e61ace4d580407284c574af28cf"} Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.400994 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.463466 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\" (UID: \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\") " Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.463569 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-combined-ca-bundle\") pod \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\" (UID: \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\") " Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.463614 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ca36c0e-2948-479f-88f8-f3ccf747bafd-dns-swift-storage-0\") pod \"7ca36c0e-2948-479f-88f8-f3ccf747bafd\" (UID: \"7ca36c0e-2948-479f-88f8-f3ccf747bafd\") " Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.463725 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ca36c0e-2948-479f-88f8-f3ccf747bafd-ovsdbserver-sb\") pod \"7ca36c0e-2948-479f-88f8-f3ccf747bafd\" (UID: \"7ca36c0e-2948-479f-88f8-f3ccf747bafd\") " Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.463750 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h59kr\" (UniqueName: \"kubernetes.io/projected/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-kube-api-access-h59kr\") pod \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\" (UID: \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\") " Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.463811 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bstm8\" (UniqueName: \"kubernetes.io/projected/7ca36c0e-2948-479f-88f8-f3ccf747bafd-kube-api-access-bstm8\") pod \"7ca36c0e-2948-479f-88f8-f3ccf747bafd\" (UID: \"7ca36c0e-2948-479f-88f8-f3ccf747bafd\") " Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.463838 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-logs\") pod \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\" (UID: \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\") " Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.463869 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ca36c0e-2948-479f-88f8-f3ccf747bafd-dns-svc\") pod \"7ca36c0e-2948-479f-88f8-f3ccf747bafd\" (UID: \"7ca36c0e-2948-479f-88f8-f3ccf747bafd\") " Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.463894 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-config-data\") pod \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\" (UID: \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\") " Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.463934 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-internal-tls-certs\") pod \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\" (UID: 
\"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\") " Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.463960 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-scripts\") pod \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\" (UID: \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\") " Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.463999 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ca36c0e-2948-479f-88f8-f3ccf747bafd-config\") pod \"7ca36c0e-2948-479f-88f8-f3ccf747bafd\" (UID: \"7ca36c0e-2948-479f-88f8-f3ccf747bafd\") " Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.464021 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ca36c0e-2948-479f-88f8-f3ccf747bafd-ovsdbserver-nb\") pod \"7ca36c0e-2948-479f-88f8-f3ccf747bafd\" (UID: \"7ca36c0e-2948-479f-88f8-f3ccf747bafd\") " Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.464062 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-httpd-run\") pod \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\" (UID: \"ae2e846a-d98e-403f-9a1b-1c3dfdcafc78\") " Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.464190 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-logs" (OuterVolumeSpecName: "logs") pod "ae2e846a-d98e-403f-9a1b-1c3dfdcafc78" (UID: "ae2e846a-d98e-403f-9a1b-1c3dfdcafc78"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.464491 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ae2e846a-d98e-403f-9a1b-1c3dfdcafc78" (UID: "ae2e846a-d98e-403f-9a1b-1c3dfdcafc78"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.464645 4892 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.464671 4892 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-logs\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.469372 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-kube-api-access-h59kr" (OuterVolumeSpecName: "kube-api-access-h59kr") pod "ae2e846a-d98e-403f-9a1b-1c3dfdcafc78" (UID: "ae2e846a-d98e-403f-9a1b-1c3dfdcafc78"). InnerVolumeSpecName "kube-api-access-h59kr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.469635 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "ae2e846a-d98e-403f-9a1b-1c3dfdcafc78" (UID: "ae2e846a-d98e-403f-9a1b-1c3dfdcafc78"). 
InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.470304 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ca36c0e-2948-479f-88f8-f3ccf747bafd-kube-api-access-bstm8" (OuterVolumeSpecName: "kube-api-access-bstm8") pod "7ca36c0e-2948-479f-88f8-f3ccf747bafd" (UID: "7ca36c0e-2948-479f-88f8-f3ccf747bafd"). InnerVolumeSpecName "kube-api-access-bstm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.472877 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-scripts" (OuterVolumeSpecName: "scripts") pod "ae2e846a-d98e-403f-9a1b-1c3dfdcafc78" (UID: "ae2e846a-d98e-403f-9a1b-1c3dfdcafc78"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.509234 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae2e846a-d98e-403f-9a1b-1c3dfdcafc78" (UID: "ae2e846a-d98e-403f-9a1b-1c3dfdcafc78"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.528012 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ca36c0e-2948-479f-88f8-f3ccf747bafd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7ca36c0e-2948-479f-88f8-f3ccf747bafd" (UID: "7ca36c0e-2948-479f-88f8-f3ccf747bafd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.528041 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ca36c0e-2948-479f-88f8-f3ccf747bafd-config" (OuterVolumeSpecName: "config") pod "7ca36c0e-2948-479f-88f8-f3ccf747bafd" (UID: "7ca36c0e-2948-479f-88f8-f3ccf747bafd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.528143 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-config-data" (OuterVolumeSpecName: "config-data") pod "ae2e846a-d98e-403f-9a1b-1c3dfdcafc78" (UID: "ae2e846a-d98e-403f-9a1b-1c3dfdcafc78"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.528557 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ca36c0e-2948-479f-88f8-f3ccf747bafd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7ca36c0e-2948-479f-88f8-f3ccf747bafd" (UID: "7ca36c0e-2948-479f-88f8-f3ccf747bafd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.532687 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ca36c0e-2948-479f-88f8-f3ccf747bafd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7ca36c0e-2948-479f-88f8-f3ccf747bafd" (UID: "7ca36c0e-2948-479f-88f8-f3ccf747bafd"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.533528 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ca36c0e-2948-479f-88f8-f3ccf747bafd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7ca36c0e-2948-479f-88f8-f3ccf747bafd" (UID: "7ca36c0e-2948-479f-88f8-f3ccf747bafd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.538045 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ae2e846a-d98e-403f-9a1b-1c3dfdcafc78" (UID: "ae2e846a-d98e-403f-9a1b-1c3dfdcafc78"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.566401 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.566431 4892 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ca36c0e-2948-479f-88f8-f3ccf747bafd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.566440 4892 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ca36c0e-2948-479f-88f8-f3ccf747bafd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.566449 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h59kr\" (UniqueName: \"kubernetes.io/projected/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-kube-api-access-h59kr\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.566460 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bstm8\" (UniqueName: \"kubernetes.io/projected/7ca36c0e-2948-479f-88f8-f3ccf747bafd-kube-api-access-bstm8\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.566469 4892 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ca36c0e-2948-479f-88f8-f3ccf747bafd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.566477 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.566485 4892 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.566493 4892 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.566501 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7ca36c0e-2948-479f-88f8-f3ccf747bafd-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.566510 4892 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ca36c0e-2948-479f-88f8-f3ccf747bafd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.566540 4892 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.586067 4892 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.672611 4892 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.727759 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-mgwb6"] Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.741437 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-mgwb6"] Jan 22 09:28:18 crc kubenswrapper[4892]: E0122 09:28:18.742684 4892 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5a548c25fe3d02f7a042cb0a6d28fc8039a34c4a3b3d07aadda4aba3a926e777" Jan 22 09:28:18 crc kubenswrapper[4892]: E0122 09:28:18.742849 4892 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5a548c25fe3d02f7a042cb0a6d28fc8039a34c4a3b3d07aadda4aba3a926e777,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n57h557h59ch58fh664h99h68ch56bh7bh96h644h5dfhdbh556hcbhb4h54fhfch67ch5bbh8ch559h54ch67bh54h557h647h7ch5c6h5cfh658h5cfq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68sl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0b1b8dc7-a488-447d-8c4a-21119f3e3dd4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.755889 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.763915 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.786067 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 09:28:18 crc kubenswrapper[4892]: E0122 09:28:18.786414 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae2e846a-d98e-403f-9a1b-1c3dfdcafc78" containerName="glance-log" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.786430 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae2e846a-d98e-403f-9a1b-1c3dfdcafc78" containerName="glance-log" Jan 22 09:28:18 crc kubenswrapper[4892]: E0122 09:28:18.786448 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ca36c0e-2948-479f-88f8-f3ccf747bafd" containerName="dnsmasq-dns" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.786456 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ca36c0e-2948-479f-88f8-f3ccf747bafd" containerName="dnsmasq-dns" Jan 22 09:28:18 crc kubenswrapper[4892]: E0122 09:28:18.786477 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae2e846a-d98e-403f-9a1b-1c3dfdcafc78" containerName="glance-httpd" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.786483 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae2e846a-d98e-403f-9a1b-1c3dfdcafc78" containerName="glance-httpd" Jan 22 09:28:18 crc kubenswrapper[4892]: E0122 09:28:18.786492 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ca36c0e-2948-479f-88f8-f3ccf747bafd" containerName="init" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.786498 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ca36c0e-2948-479f-88f8-f3ccf747bafd" containerName="init" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.786672 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae2e846a-d98e-403f-9a1b-1c3dfdcafc78" containerName="glance-log" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.786685 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ca36c0e-2948-479f-88f8-f3ccf747bafd" containerName="dnsmasq-dns" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.786696 4892 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ae2e846a-d98e-403f-9a1b-1c3dfdcafc78" containerName="glance-httpd" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.787629 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.789179 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.789784 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.793179 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.976729 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ac9e125-77fe-4415-b124-bdd6816b313d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8ac9e125-77fe-4415-b124-bdd6816b313d\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.976788 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs7r4\" (UniqueName: \"kubernetes.io/projected/8ac9e125-77fe-4415-b124-bdd6816b313d-kube-api-access-gs7r4\") pod \"glance-default-internal-api-0\" (UID: \"8ac9e125-77fe-4415-b124-bdd6816b313d\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.976825 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ac9e125-77fe-4415-b124-bdd6816b313d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8ac9e125-77fe-4415-b124-bdd6816b313d\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.976870 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"8ac9e125-77fe-4415-b124-bdd6816b313d\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.976907 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ac9e125-77fe-4415-b124-bdd6816b313d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8ac9e125-77fe-4415-b124-bdd6816b313d\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.976939 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ac9e125-77fe-4415-b124-bdd6816b313d-logs\") pod \"glance-default-internal-api-0\" (UID: \"8ac9e125-77fe-4415-b124-bdd6816b313d\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.976969 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ac9e125-77fe-4415-b124-bdd6816b313d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8ac9e125-77fe-4415-b124-bdd6816b313d\") " 
pod="openstack/glance-default-internal-api-0" Jan 22 09:28:18 crc kubenswrapper[4892]: I0122 09:28:18.977032 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ac9e125-77fe-4415-b124-bdd6816b313d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8ac9e125-77fe-4415-b124-bdd6816b313d\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:28:19 crc kubenswrapper[4892]: I0122 09:28:19.079074 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ac9e125-77fe-4415-b124-bdd6816b313d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8ac9e125-77fe-4415-b124-bdd6816b313d\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:28:19 crc kubenswrapper[4892]: I0122 09:28:19.079157 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"8ac9e125-77fe-4415-b124-bdd6816b313d\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:28:19 crc kubenswrapper[4892]: I0122 09:28:19.079213 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ac9e125-77fe-4415-b124-bdd6816b313d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8ac9e125-77fe-4415-b124-bdd6816b313d\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:28:19 crc kubenswrapper[4892]: I0122 09:28:19.079258 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ac9e125-77fe-4415-b124-bdd6816b313d-logs\") pod \"glance-default-internal-api-0\" (UID: \"8ac9e125-77fe-4415-b124-bdd6816b313d\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:28:19 crc kubenswrapper[4892]: I0122 09:28:19.079330 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ac9e125-77fe-4415-b124-bdd6816b313d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8ac9e125-77fe-4415-b124-bdd6816b313d\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:28:19 crc kubenswrapper[4892]: I0122 09:28:19.079364 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ac9e125-77fe-4415-b124-bdd6816b313d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8ac9e125-77fe-4415-b124-bdd6816b313d\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:28:19 crc kubenswrapper[4892]: I0122 09:28:19.079419 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ac9e125-77fe-4415-b124-bdd6816b313d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8ac9e125-77fe-4415-b124-bdd6816b313d\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:28:19 crc kubenswrapper[4892]: I0122 09:28:19.079460 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs7r4\" (UniqueName: \"kubernetes.io/projected/8ac9e125-77fe-4415-b124-bdd6816b313d-kube-api-access-gs7r4\") pod \"glance-default-internal-api-0\" (UID: \"8ac9e125-77fe-4415-b124-bdd6816b313d\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:28:19 crc 
kubenswrapper[4892]: I0122 09:28:19.079992 4892 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"8ac9e125-77fe-4415-b124-bdd6816b313d\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Jan 22 09:28:19 crc kubenswrapper[4892]: I0122 09:28:19.080175 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ac9e125-77fe-4415-b124-bdd6816b313d-logs\") pod \"glance-default-internal-api-0\" (UID: \"8ac9e125-77fe-4415-b124-bdd6816b313d\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:28:19 crc kubenswrapper[4892]: I0122 09:28:19.080217 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ac9e125-77fe-4415-b124-bdd6816b313d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8ac9e125-77fe-4415-b124-bdd6816b313d\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:28:19 crc kubenswrapper[4892]: I0122 09:28:19.083728 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ac9e125-77fe-4415-b124-bdd6816b313d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8ac9e125-77fe-4415-b124-bdd6816b313d\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:28:19 crc kubenswrapper[4892]: I0122 09:28:19.083790 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ac9e125-77fe-4415-b124-bdd6816b313d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8ac9e125-77fe-4415-b124-bdd6816b313d\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:28:19 crc kubenswrapper[4892]: I0122 09:28:19.083830 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ac9e125-77fe-4415-b124-bdd6816b313d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8ac9e125-77fe-4415-b124-bdd6816b313d\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:28:19 crc kubenswrapper[4892]: I0122 09:28:19.085908 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ac9e125-77fe-4415-b124-bdd6816b313d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8ac9e125-77fe-4415-b124-bdd6816b313d\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:28:19 crc kubenswrapper[4892]: I0122 09:28:19.100953 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs7r4\" (UniqueName: \"kubernetes.io/projected/8ac9e125-77fe-4415-b124-bdd6816b313d-kube-api-access-gs7r4\") pod \"glance-default-internal-api-0\" (UID: \"8ac9e125-77fe-4415-b124-bdd6816b313d\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:28:19 crc kubenswrapper[4892]: I0122 09:28:19.103206 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"8ac9e125-77fe-4415-b124-bdd6816b313d\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:28:19 crc kubenswrapper[4892]: I0122 09:28:19.110668 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 09:28:19 crc kubenswrapper[4892]: I0122 09:28:19.366991 4892 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56c9bc6f5c-mgwb6" podUID="7ca36c0e-2948-479f-88f8-f3ccf747bafd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: i/o timeout" Jan 22 09:28:19 crc kubenswrapper[4892]: I0122 09:28:19.431478 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ca36c0e-2948-479f-88f8-f3ccf747bafd" path="/var/lib/kubelet/pods/7ca36c0e-2948-479f-88f8-f3ccf747bafd/volumes" Jan 22 09:28:19 crc kubenswrapper[4892]: I0122 09:28:19.432187 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae2e846a-d98e-403f-9a1b-1c3dfdcafc78" path="/var/lib/kubelet/pods/ae2e846a-d98e-403f-9a1b-1c3dfdcafc78/volumes" Jan 22 09:28:19 crc kubenswrapper[4892]: E0122 09:28:19.488206 4892 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16" Jan 22 09:28:19 crc kubenswrapper[4892]: E0122 09:28:19.488456 4892 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n8jp6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-lnmlq_openstack(7fa10241-be09-4db5-894b-845654f34a21): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 09:28:19 crc kubenswrapper[4892]: E0122 09:28:19.489714 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-lnmlq" 
podUID="7fa10241-be09-4db5-894b-845654f34a21" Jan 22 09:28:20 crc kubenswrapper[4892]: E0122 09:28:20.417514 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16\\\"\"" pod="openstack/barbican-db-sync-lnmlq" podUID="7fa10241-be09-4db5-894b-845654f34a21" Jan 22 09:28:20 crc kubenswrapper[4892]: E0122 09:28:20.730578 4892 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49" Jan 22 09:28:20 crc kubenswrapper[4892]: E0122 09:28:20.730757 4892 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8f8xr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-7gn6t_openstack(8998452c-d0f3-42a2-8741-c70ffe854fda): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Jan 22 09:28:20 crc kubenswrapper[4892]: E0122 09:28:20.732277 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-7gn6t" podUID="8998452c-d0f3-42a2-8741-c70ffe854fda" Jan 22 09:28:20 crc kubenswrapper[4892]: I0122 09:28:20.802854 4892 scope.go:117] "RemoveContainer" containerID="4ab094d9cbb4d372e5035e41bf67a4c249e124956ec1719b6bf030f44d2460b8" Jan 22 09:28:20 crc kubenswrapper[4892]: I0122 09:28:20.932795 4892 scope.go:117] "RemoveContainer" containerID="e7dc7c8e4e893a8048d474362d2857379310ac79fbca59a64e4d5c95263fda37" Jan 22 09:28:20 crc kubenswrapper[4892]: I0122 09:28:20.969250 4892 scope.go:117] "RemoveContainer" containerID="630bc34d1388698047344bc01c466d4df0eeca17756b334d79b5c2ac6cfb9f77" Jan 22 09:28:21 crc kubenswrapper[4892]: I0122 09:28:21.349915 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lh52x"] Jan 22 09:28:21 crc kubenswrapper[4892]: I0122 09:28:21.407781 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7bd8749ddb-x9h4l"] Jan 22 09:28:21 crc kubenswrapper[4892]: I0122 09:28:21.496184 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-79777d5484-zk25q"] Jan 22 09:28:21 crc kubenswrapper[4892]: I0122 09:28:21.519777 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5zrrd" event={"ID":"4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4","Type":"ContainerStarted","Data":"e6bdf9989110bdcbe7493a35733771820dfed07321d77d9cce02845921687cf7"} Jan 22 09:28:21 crc kubenswrapper[4892]: I0122 09:28:21.540640 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75dc8c68f7-bpxpg" event={"ID":"eca8ea69-b1df-4f64-b894-d1a33fedef9d","Type":"ContainerStarted","Data":"f50283ea3aeaeadfb0c01695dc2a745e1231e8ab34f3d079c0466ba9ed5f2c27"} Jan 22 09:28:21 crc kubenswrapper[4892]: I0122 09:28:21.540656 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-75dc8c68f7-bpxpg" podUID="eca8ea69-b1df-4f64-b894-d1a33fedef9d" containerName="horizon-log" containerID="cri-o://ac30620e4a7441c5bf8b9df49d535cd8d829ef64737ab2a5bbf263692d347eb2" gracePeriod=30 Jan 22 09:28:21 crc kubenswrapper[4892]: I0122 09:28:21.540688 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75dc8c68f7-bpxpg" event={"ID":"eca8ea69-b1df-4f64-b894-d1a33fedef9d","Type":"ContainerStarted","Data":"ac30620e4a7441c5bf8b9df49d535cd8d829ef64737ab2a5bbf263692d347eb2"} Jan 22 09:28:21 crc kubenswrapper[4892]: I0122 09:28:21.540744 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-75dc8c68f7-bpxpg" podUID="eca8ea69-b1df-4f64-b894-d1a33fedef9d" containerName="horizon" containerID="cri-o://f50283ea3aeaeadfb0c01695dc2a745e1231e8ab34f3d079c0466ba9ed5f2c27" gracePeriod=30 Jan 22 09:28:21 crc kubenswrapper[4892]: I0122 09:28:21.555188 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7db6cccc7c-d7w6r" event={"ID":"c3b473db-e057-4d55-b3e6-171e8618722f","Type":"ContainerStarted","Data":"a72a66a408f5cdf755796c6059cfdc114c9be6f246a49fdad645f6179b68e468"} Jan 22 09:28:21 crc kubenswrapper[4892]: I0122 09:28:21.557496 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lh52x" 
event={"ID":"892a47e9-2f83-4902-a210-3b23d56ad662","Type":"ContainerStarted","Data":"33ebc919dbb0f3b312684d5838d6f106af9eabdb5a35b987bf8949b3fc8a875e"} Jan 22 09:28:21 crc kubenswrapper[4892]: I0122 09:28:21.566573 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b6778c569-gt8dk" event={"ID":"d02e1363-2043-4097-bda0-012158a0bf56","Type":"ContainerStarted","Data":"4c7946be0ef7904033024a8768ed1cf521bdc88101ca19e49f9f4abae56c094d"} Jan 22 09:28:21 crc kubenswrapper[4892]: I0122 09:28:21.566736 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b6778c569-gt8dk" podUID="d02e1363-2043-4097-bda0-012158a0bf56" containerName="horizon-log" containerID="cri-o://4c7946be0ef7904033024a8768ed1cf521bdc88101ca19e49f9f4abae56c094d" gracePeriod=30 Jan 22 09:28:21 crc kubenswrapper[4892]: I0122 09:28:21.566839 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b6778c569-gt8dk" podUID="d02e1363-2043-4097-bda0-012158a0bf56" containerName="horizon" containerID="cri-o://fe4c78110a038a44d53d02ee5c6b45661aa807aa6742c35d586291b7fd57fca5" gracePeriod=30 Jan 22 09:28:21 crc kubenswrapper[4892]: I0122 09:28:21.576168 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-5zrrd" podStartSLOduration=5.802139596 podStartE2EDuration="27.576148932s" podCreationTimestamp="2026-01-22 09:27:54 +0000 UTC" firstStartedPulling="2026-01-22 09:27:56.500853398 +0000 UTC m=+1046.344932471" lastFinishedPulling="2026-01-22 09:28:18.274862744 +0000 UTC m=+1068.118941807" observedRunningTime="2026-01-22 09:28:21.549630306 +0000 UTC m=+1071.393709369" watchObservedRunningTime="2026-01-22 09:28:21.576148932 +0000 UTC m=+1071.420227995" Jan 22 09:28:21 crc kubenswrapper[4892]: E0122 09:28:21.588540 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49\\\"\"" pod="openstack/cinder-db-sync-7gn6t" podUID="8998452c-d0f3-42a2-8741-c70ffe854fda" Jan 22 09:28:21 crc kubenswrapper[4892]: I0122 09:28:21.604576 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-75dc8c68f7-bpxpg" podStartSLOduration=5.402660517 podStartE2EDuration="27.604558843s" podCreationTimestamp="2026-01-22 09:27:54 +0000 UTC" firstStartedPulling="2026-01-22 09:27:56.518922922 +0000 UTC m=+1046.363001995" lastFinishedPulling="2026-01-22 09:28:18.720821248 +0000 UTC m=+1068.564900321" observedRunningTime="2026-01-22 09:28:21.570506647 +0000 UTC m=+1071.414585710" watchObservedRunningTime="2026-01-22 09:28:21.604558843 +0000 UTC m=+1071.448637906" Jan 22 09:28:21 crc kubenswrapper[4892]: I0122 09:28:21.610743 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5b6778c569-gt8dk" podStartSLOduration=3.32625773 podStartE2EDuration="27.610726871s" podCreationTimestamp="2026-01-22 09:27:54 +0000 UTC" firstStartedPulling="2026-01-22 09:27:56.525156581 +0000 UTC m=+1046.369235644" lastFinishedPulling="2026-01-22 09:28:20.809625722 +0000 UTC m=+1070.653704785" observedRunningTime="2026-01-22 09:28:21.599628855 +0000 UTC m=+1071.443707928" watchObservedRunningTime="2026-01-22 09:28:21.610726871 +0000 UTC m=+1071.454805934" Jan 22 09:28:21 crc kubenswrapper[4892]: I0122 09:28:21.684061 4892 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 09:28:22 crc kubenswrapper[4892]: W0122 09:28:22.012574 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd4a1ded_1ebf_4e2c_ac57_572f16759b0d.slice/crio-e525d58f9abc9d667bd5706f420c4ddfa0c9513b8b05eeea2b8aa922eff8337f WatchSource:0}: Error finding container e525d58f9abc9d667bd5706f420c4ddfa0c9513b8b05eeea2b8aa922eff8337f: Status 404 returned error can't find the container with id e525d58f9abc9d667bd5706f420c4ddfa0c9513b8b05eeea2b8aa922eff8337f Jan 22 09:28:22 crc kubenswrapper[4892]: I0122 09:28:22.605397 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b6778c569-gt8dk" event={"ID":"d02e1363-2043-4097-bda0-012158a0bf56","Type":"ContainerStarted","Data":"fe4c78110a038a44d53d02ee5c6b45661aa807aa6742c35d586291b7fd57fca5"} Jan 22 09:28:22 crc kubenswrapper[4892]: I0122 09:28:22.614812 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79777d5484-zk25q" event={"ID":"c837fca4-ae2c-43fd-850c-f2aca8331d27","Type":"ContainerStarted","Data":"a517a1366ff75b3114aba08a5cd170cff1a9a46111a0373888655bd0a308fa5e"} Jan 22 09:28:22 crc kubenswrapper[4892]: I0122 09:28:22.614858 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79777d5484-zk25q" event={"ID":"c837fca4-ae2c-43fd-850c-f2aca8331d27","Type":"ContainerStarted","Data":"29a40a09756cba7b3751e000919e2a9027e341f2b2d60abf1bab59b34c64bafb"} Jan 22 09:28:22 crc kubenswrapper[4892]: I0122 09:28:22.614870 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79777d5484-zk25q" event={"ID":"c837fca4-ae2c-43fd-850c-f2aca8331d27","Type":"ContainerStarted","Data":"007f9513e989fb532fef7d363ef4362fac0b90a50ff7a3e628f1010a985c6af1"} Jan 22 09:28:22 crc kubenswrapper[4892]: I0122 09:28:22.626584 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 09:28:22 crc kubenswrapper[4892]: I0122 09:28:22.628434 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4","Type":"ContainerStarted","Data":"d6b4ec33cb48d062ea41f639919a5b3e17d94fe4ffe83d9b4bde6b7be77d52cd"} Jan 22 09:28:22 crc kubenswrapper[4892]: I0122 09:28:22.645049 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7db6cccc7c-d7w6r" event={"ID":"c3b473db-e057-4d55-b3e6-171e8618722f","Type":"ContainerStarted","Data":"bf44bcedaa1c0496a00b680d7b533f93ba802aeb75351a460d0a4d75a60777c9"} Jan 22 09:28:22 crc kubenswrapper[4892]: I0122 09:28:22.645198 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7db6cccc7c-d7w6r" podUID="c3b473db-e057-4d55-b3e6-171e8618722f" containerName="horizon-log" containerID="cri-o://a72a66a408f5cdf755796c6059cfdc114c9be6f246a49fdad645f6179b68e468" gracePeriod=30 Jan 22 09:28:22 crc kubenswrapper[4892]: I0122 09:28:22.645545 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7db6cccc7c-d7w6r" podUID="c3b473db-e057-4d55-b3e6-171e8618722f" containerName="horizon" containerID="cri-o://bf44bcedaa1c0496a00b680d7b533f93ba802aeb75351a460d0a4d75a60777c9" gracePeriod=30 Jan 22 09:28:22 crc kubenswrapper[4892]: I0122 09:28:22.647056 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-79777d5484-zk25q" 
podStartSLOduration=19.647028769 podStartE2EDuration="19.647028769s" podCreationTimestamp="2026-01-22 09:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:28:22.638657878 +0000 UTC m=+1072.482736941" watchObservedRunningTime="2026-01-22 09:28:22.647028769 +0000 UTC m=+1072.491107832" Jan 22 09:28:22 crc kubenswrapper[4892]: I0122 09:28:22.655159 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d","Type":"ContainerStarted","Data":"e525d58f9abc9d667bd5706f420c4ddfa0c9513b8b05eeea2b8aa922eff8337f"} Jan 22 09:28:22 crc kubenswrapper[4892]: I0122 09:28:22.662959 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lh52x" event={"ID":"892a47e9-2f83-4902-a210-3b23d56ad662","Type":"ContainerStarted","Data":"5332bf474020455b8e38ff2cd59ac3697b1a724d1e3d2db8fce8bbe04bc8f323"} Jan 22 09:28:22 crc kubenswrapper[4892]: I0122 09:28:22.669552 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7db6cccc7c-d7w6r" podStartSLOduration=3.809703697 podStartE2EDuration="26.669535909s" podCreationTimestamp="2026-01-22 09:27:56 +0000 UTC" firstStartedPulling="2026-01-22 09:27:57.959071503 +0000 UTC m=+1047.803150566" lastFinishedPulling="2026-01-22 09:28:20.818903715 +0000 UTC m=+1070.662982778" observedRunningTime="2026-01-22 09:28:22.666145847 +0000 UTC m=+1072.510224900" watchObservedRunningTime="2026-01-22 09:28:22.669535909 +0000 UTC m=+1072.513614972" Jan 22 09:28:22 crc kubenswrapper[4892]: I0122 09:28:22.671681 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bd8749ddb-x9h4l" event={"ID":"a434b179-017a-4112-a673-1859114a62ed","Type":"ContainerStarted","Data":"306a2d2a28821402a60dcf5a6b835440b9dc05be9c3da54491267bc27cb32df5"} Jan 22 09:28:22 crc kubenswrapper[4892]: I0122 09:28:22.671724 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bd8749ddb-x9h4l" event={"ID":"a434b179-017a-4112-a673-1859114a62ed","Type":"ContainerStarted","Data":"b3fecb88cad04876f21c73f127536a019ae1c4782c997dac28742d57e5c09dc1"} Jan 22 09:28:22 crc kubenswrapper[4892]: I0122 09:28:22.671736 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bd8749ddb-x9h4l" event={"ID":"a434b179-017a-4112-a673-1859114a62ed","Type":"ContainerStarted","Data":"f1b228c51146a62931d915284a2a5c41f3370fab9b9cdf59533f787bb236ec1e"} Jan 22 09:28:22 crc kubenswrapper[4892]: I0122 09:28:22.683738 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-lh52x" podStartSLOduration=13.683721929 podStartE2EDuration="13.683721929s" podCreationTimestamp="2026-01-22 09:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:28:22.680151513 +0000 UTC m=+1072.524230576" watchObservedRunningTime="2026-01-22 09:28:22.683721929 +0000 UTC m=+1072.527800992" Jan 22 09:28:22 crc kubenswrapper[4892]: I0122 09:28:22.721188 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7bd8749ddb-x9h4l" podStartSLOduration=19.721150746 podStartE2EDuration="19.721150746s" podCreationTimestamp="2026-01-22 09:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-22 09:28:22.714599199 +0000 UTC m=+1072.558678262" watchObservedRunningTime="2026-01-22 09:28:22.721150746 +0000 UTC m=+1072.565229809" Jan 22 09:28:23 crc kubenswrapper[4892]: I0122 09:28:23.705936 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8ac9e125-77fe-4415-b124-bdd6816b313d","Type":"ContainerStarted","Data":"59c06b000a9cbda5fa1e6e5a8841b468d6cad2333bcec4686276681679acd1c5"} Jan 22 09:28:23 crc kubenswrapper[4892]: I0122 09:28:23.706554 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8ac9e125-77fe-4415-b124-bdd6816b313d","Type":"ContainerStarted","Data":"31b18a27d1bbfed4fc5ec923aa89ee70ec23d7126b5524823c4595a30e578221"} Jan 22 09:28:23 crc kubenswrapper[4892]: I0122 09:28:23.729891 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fd4a1ded-1ebf-4e2c-ac57-572f16759b0d" containerName="glance-log" containerID="cri-o://358784be3d8422c2079018a1883275f30509a59e462e9b9cb55f9f8bbc6d3d88" gracePeriod=30 Jan 22 09:28:23 crc kubenswrapper[4892]: I0122 09:28:23.730157 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d","Type":"ContainerStarted","Data":"1ea9d7ef1c10405966222d9c80d9b6c9b872ab221d4b312146dd0455fb6b9c84"} Jan 22 09:28:23 crc kubenswrapper[4892]: I0122 09:28:23.730185 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d","Type":"ContainerStarted","Data":"358784be3d8422c2079018a1883275f30509a59e462e9b9cb55f9f8bbc6d3d88"} Jan 22 09:28:23 crc kubenswrapper[4892]: I0122 09:28:23.731247 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fd4a1ded-1ebf-4e2c-ac57-572f16759b0d" containerName="glance-httpd" containerID="cri-o://1ea9d7ef1c10405966222d9c80d9b6c9b872ab221d4b312146dd0455fb6b9c84" gracePeriod=30 Jan 22 09:28:23 crc kubenswrapper[4892]: I0122 09:28:23.757394 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=20.757378883 podStartE2EDuration="20.757378883s" podCreationTimestamp="2026-01-22 09:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:28:23.755616121 +0000 UTC m=+1073.599695174" watchObservedRunningTime="2026-01-22 09:28:23.757378883 +0000 UTC m=+1073.601457956" Jan 22 09:28:24 crc kubenswrapper[4892]: I0122 09:28:24.186182 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-79777d5484-zk25q" Jan 22 09:28:24 crc kubenswrapper[4892]: I0122 09:28:24.186544 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-79777d5484-zk25q" Jan 22 09:28:24 crc kubenswrapper[4892]: I0122 09:28:24.316423 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7bd8749ddb-x9h4l" Jan 22 09:28:24 crc kubenswrapper[4892]: I0122 09:28:24.316485 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7bd8749ddb-x9h4l" Jan 22 09:28:24 crc kubenswrapper[4892]: I0122 09:28:24.658792 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/horizon-5b6778c569-gt8dk" Jan 22 09:28:24 crc kubenswrapper[4892]: I0122 09:28:24.743150 4892 generic.go:334] "Generic (PLEG): container finished" podID="fd4a1ded-1ebf-4e2c-ac57-572f16759b0d" containerID="358784be3d8422c2079018a1883275f30509a59e462e9b9cb55f9f8bbc6d3d88" exitCode=143 Jan 22 09:28:24 crc kubenswrapper[4892]: I0122 09:28:24.743224 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d","Type":"ContainerDied","Data":"358784be3d8422c2079018a1883275f30509a59e462e9b9cb55f9f8bbc6d3d88"} Jan 22 09:28:25 crc kubenswrapper[4892]: I0122 09:28:25.150435 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-75dc8c68f7-bpxpg" Jan 22 09:28:25 crc kubenswrapper[4892]: I0122 09:28:25.756125 4892 generic.go:334] "Generic (PLEG): container finished" podID="fd4a1ded-1ebf-4e2c-ac57-572f16759b0d" containerID="1ea9d7ef1c10405966222d9c80d9b6c9b872ab221d4b312146dd0455fb6b9c84" exitCode=143 Jan 22 09:28:25 crc kubenswrapper[4892]: I0122 09:28:25.756183 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d","Type":"ContainerDied","Data":"1ea9d7ef1c10405966222d9c80d9b6c9b872ab221d4b312146dd0455fb6b9c84"} Jan 22 09:28:25 crc kubenswrapper[4892]: I0122 09:28:25.757733 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8ac9e125-77fe-4415-b124-bdd6816b313d","Type":"ContainerStarted","Data":"1386fdc923c72735bf28abd229c4dbece689581e11d214b9dd509ad6a52e576f"} Jan 22 09:28:25 crc kubenswrapper[4892]: I0122 09:28:25.789401 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.7893805369999995 podStartE2EDuration="7.789380537s" podCreationTimestamp="2026-01-22 09:28:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:28:25.779982661 +0000 UTC m=+1075.624061714" watchObservedRunningTime="2026-01-22 09:28:25.789380537 +0000 UTC m=+1075.633459610" Jan 22 09:28:27 crc kubenswrapper[4892]: I0122 09:28:27.339395 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7db6cccc7c-d7w6r" Jan 22 09:28:29 crc kubenswrapper[4892]: I0122 09:28:29.111599 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 22 09:28:29 crc kubenswrapper[4892]: I0122 09:28:29.111943 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 22 09:28:29 crc kubenswrapper[4892]: I0122 09:28:29.146319 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 22 09:28:29 crc kubenswrapper[4892]: I0122 09:28:29.156145 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 22 09:28:29 crc kubenswrapper[4892]: I0122 09:28:29.789931 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 22 09:28:29 crc kubenswrapper[4892]: I0122 09:28:29.790271 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 22 09:28:30 crc 
kubenswrapper[4892]: I0122 09:28:30.297196 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.455785 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\" (UID: \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\") " Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.455872 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-public-tls-certs\") pod \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\" (UID: \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\") " Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.455926 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-scripts\") pod \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\" (UID: \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\") " Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.455957 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-logs\") pod \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\" (UID: \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\") " Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.456041 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-combined-ca-bundle\") pod \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\" (UID: \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\") " Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.456069 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-config-data\") pod \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\" (UID: \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\") " Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.456115 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fqlt\" (UniqueName: \"kubernetes.io/projected/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-kube-api-access-9fqlt\") pod \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\" (UID: \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\") " Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.456147 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-httpd-run\") pod \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\" (UID: \"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d\") " Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.456309 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-logs" (OuterVolumeSpecName: "logs") pod "fd4a1ded-1ebf-4e2c-ac57-572f16759b0d" (UID: "fd4a1ded-1ebf-4e2c-ac57-572f16759b0d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.456530 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fd4a1ded-1ebf-4e2c-ac57-572f16759b0d" (UID: "fd4a1ded-1ebf-4e2c-ac57-572f16759b0d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.457039 4892 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.457089 4892 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-logs\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.464026 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-scripts" (OuterVolumeSpecName: "scripts") pod "fd4a1ded-1ebf-4e2c-ac57-572f16759b0d" (UID: "fd4a1ded-1ebf-4e2c-ac57-572f16759b0d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.477244 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-kube-api-access-9fqlt" (OuterVolumeSpecName: "kube-api-access-9fqlt") pod "fd4a1ded-1ebf-4e2c-ac57-572f16759b0d" (UID: "fd4a1ded-1ebf-4e2c-ac57-572f16759b0d"). InnerVolumeSpecName "kube-api-access-9fqlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.486397 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "fd4a1ded-1ebf-4e2c-ac57-572f16759b0d" (UID: "fd4a1ded-1ebf-4e2c-ac57-572f16759b0d"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.525423 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd4a1ded-1ebf-4e2c-ac57-572f16759b0d" (UID: "fd4a1ded-1ebf-4e2c-ac57-572f16759b0d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.541315 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fd4a1ded-1ebf-4e2c-ac57-572f16759b0d" (UID: "fd4a1ded-1ebf-4e2c-ac57-572f16759b0d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.543397 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-config-data" (OuterVolumeSpecName: "config-data") pod "fd4a1ded-1ebf-4e2c-ac57-572f16759b0d" (UID: "fd4a1ded-1ebf-4e2c-ac57-572f16759b0d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.558670 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.558707 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fqlt\" (UniqueName: \"kubernetes.io/projected/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-kube-api-access-9fqlt\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.558733 4892 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.558745 4892 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.558757 4892 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.558768 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.578627 4892 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.660881 4892 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.800505 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fd4a1ded-1ebf-4e2c-ac57-572f16759b0d","Type":"ContainerDied","Data":"e525d58f9abc9d667bd5706f420c4ddfa0c9513b8b05eeea2b8aa922eff8337f"} Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.800559 4892 scope.go:117] "RemoveContainer" containerID="1ea9d7ef1c10405966222d9c80d9b6c9b872ab221d4b312146dd0455fb6b9c84" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.800570 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.829589 4892 scope.go:117] "RemoveContainer" containerID="358784be3d8422c2079018a1883275f30509a59e462e9b9cb55f9f8bbc6d3d88" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.839532 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.864441 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.894379 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 09:28:30 crc kubenswrapper[4892]: E0122 09:28:30.894858 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd4a1ded-1ebf-4e2c-ac57-572f16759b0d" containerName="glance-log" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.894875 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd4a1ded-1ebf-4e2c-ac57-572f16759b0d" containerName="glance-log" Jan 22 09:28:30 crc kubenswrapper[4892]: E0122 09:28:30.894885 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd4a1ded-1ebf-4e2c-ac57-572f16759b0d" containerName="glance-httpd" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.894891 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd4a1ded-1ebf-4e2c-ac57-572f16759b0d" containerName="glance-httpd" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.895107 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd4a1ded-1ebf-4e2c-ac57-572f16759b0d" containerName="glance-httpd" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.895175 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd4a1ded-1ebf-4e2c-ac57-572f16759b0d" containerName="glance-log" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.896332 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.901759 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.901904 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.915469 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.968172 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-scripts\") pod \"glance-default-external-api-0\" (UID: \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.968217 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.968250 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-logs\") pod \"glance-default-external-api-0\" (UID: \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.968272 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tktjv\" (UniqueName: \"kubernetes.io/projected/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-kube-api-access-tktjv\") pod \"glance-default-external-api-0\" (UID: \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.968381 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.968401 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-config-data\") pod \"glance-default-external-api-0\" (UID: \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.968431 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:30 crc kubenswrapper[4892]: I0122 09:28:30.968463 4892 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:31 crc kubenswrapper[4892]: I0122 09:28:31.069841 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:31 crc kubenswrapper[4892]: I0122 09:28:31.069903 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-config-data\") pod \"glance-default-external-api-0\" (UID: \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:31 crc kubenswrapper[4892]: I0122 09:28:31.069954 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:31 crc kubenswrapper[4892]: I0122 09:28:31.070001 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:31 crc kubenswrapper[4892]: I0122 09:28:31.070049 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-scripts\") pod \"glance-default-external-api-0\" (UID: \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:31 crc kubenswrapper[4892]: I0122 09:28:31.070071 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:31 crc kubenswrapper[4892]: I0122 09:28:31.070104 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-logs\") pod \"glance-default-external-api-0\" (UID: \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:31 crc kubenswrapper[4892]: I0122 09:28:31.070129 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tktjv\" (UniqueName: \"kubernetes.io/projected/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-kube-api-access-tktjv\") pod \"glance-default-external-api-0\" (UID: \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:31 crc kubenswrapper[4892]: I0122 09:28:31.070675 4892 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Jan 22 09:28:31 crc kubenswrapper[4892]: I0122 09:28:31.070709 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:31 crc kubenswrapper[4892]: I0122 09:28:31.070722 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-logs\") pod \"glance-default-external-api-0\" (UID: \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:31 crc kubenswrapper[4892]: I0122 09:28:31.076135 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-scripts\") pod \"glance-default-external-api-0\" (UID: \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:31 crc kubenswrapper[4892]: I0122 09:28:31.077152 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:31 crc kubenswrapper[4892]: I0122 09:28:31.077911 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:31 crc kubenswrapper[4892]: I0122 09:28:31.078370 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-config-data\") pod \"glance-default-external-api-0\" (UID: \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:31 crc kubenswrapper[4892]: I0122 09:28:31.096887 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tktjv\" (UniqueName: \"kubernetes.io/projected/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-kube-api-access-tktjv\") pod \"glance-default-external-api-0\" (UID: \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:31 crc kubenswrapper[4892]: I0122 09:28:31.114707 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\") " pod="openstack/glance-default-external-api-0" Jan 22 09:28:31 crc kubenswrapper[4892]: I0122 09:28:31.227956 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 09:28:31 crc kubenswrapper[4892]: I0122 09:28:31.440760 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd4a1ded-1ebf-4e2c-ac57-572f16759b0d" path="/var/lib/kubelet/pods/fd4a1ded-1ebf-4e2c-ac57-572f16759b0d/volumes" Jan 22 09:28:31 crc kubenswrapper[4892]: I0122 09:28:31.971746 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 09:28:32 crc kubenswrapper[4892]: I0122 09:28:32.281763 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 22 09:28:32 crc kubenswrapper[4892]: I0122 09:28:32.290543 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 22 09:28:32 crc kubenswrapper[4892]: I0122 09:28:32.828120 4892 generic.go:334] "Generic (PLEG): container finished" podID="892a47e9-2f83-4902-a210-3b23d56ad662" containerID="5332bf474020455b8e38ff2cd59ac3697b1a724d1e3d2db8fce8bbe04bc8f323" exitCode=0 Jan 22 09:28:32 crc kubenswrapper[4892]: I0122 09:28:32.828177 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lh52x" event={"ID":"892a47e9-2f83-4902-a210-3b23d56ad662","Type":"ContainerDied","Data":"5332bf474020455b8e38ff2cd59ac3697b1a724d1e3d2db8fce8bbe04bc8f323"} Jan 22 09:28:32 crc kubenswrapper[4892]: I0122 09:28:32.832595 4892 generic.go:334] "Generic (PLEG): container finished" podID="4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4" containerID="e6bdf9989110bdcbe7493a35733771820dfed07321d77d9cce02845921687cf7" exitCode=0 Jan 22 09:28:32 crc kubenswrapper[4892]: I0122 09:28:32.833733 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5zrrd" event={"ID":"4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4","Type":"ContainerDied","Data":"e6bdf9989110bdcbe7493a35733771820dfed07321d77d9cce02845921687cf7"} Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.214692 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-79777d5484-zk25q" podUID="c837fca4-ae2c-43fd-850c-f2aca8331d27" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.316318 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7bd8749ddb-x9h4l" podUID="a434b179-017a-4112-a673-1859114a62ed" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.765515 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lh52x" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.783352 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-5zrrd" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.851483 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5zrrd" event={"ID":"4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4","Type":"ContainerDied","Data":"c50c67fcf3be20647810d6fd0e4fdb843aaf8cfb66259a8ca71f9064ce261482"} Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.851520 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c50c67fcf3be20647810d6fd0e4fdb843aaf8cfb66259a8ca71f9064ce261482" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.851566 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-5zrrd" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.853235 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4-config-data\") pod \"4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4\" (UID: \"4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4\") " Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.853258 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/892a47e9-2f83-4902-a210-3b23d56ad662-scripts\") pod \"892a47e9-2f83-4902-a210-3b23d56ad662\" (UID: \"892a47e9-2f83-4902-a210-3b23d56ad662\") " Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.853327 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4-logs\") pod \"4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4\" (UID: \"4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4\") " Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.853350 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmntc\" (UniqueName: \"kubernetes.io/projected/4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4-kube-api-access-wmntc\") pod \"4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4\" (UID: \"4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4\") " Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.853392 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n69g8\" (UniqueName: \"kubernetes.io/projected/892a47e9-2f83-4902-a210-3b23d56ad662-kube-api-access-n69g8\") pod \"892a47e9-2f83-4902-a210-3b23d56ad662\" (UID: \"892a47e9-2f83-4902-a210-3b23d56ad662\") " Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.853430 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4-combined-ca-bundle\") pod \"4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4\" (UID: \"4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4\") " Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.853447 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/892a47e9-2f83-4902-a210-3b23d56ad662-credential-keys\") pod \"892a47e9-2f83-4902-a210-3b23d56ad662\" (UID: \"892a47e9-2f83-4902-a210-3b23d56ad662\") " Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.853498 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/892a47e9-2f83-4902-a210-3b23d56ad662-fernet-keys\") pod \"892a47e9-2f83-4902-a210-3b23d56ad662\" (UID: 
\"892a47e9-2f83-4902-a210-3b23d56ad662\") " Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.853527 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4-scripts\") pod \"4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4\" (UID: \"4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4\") " Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.853570 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/892a47e9-2f83-4902-a210-3b23d56ad662-combined-ca-bundle\") pod \"892a47e9-2f83-4902-a210-3b23d56ad662\" (UID: \"892a47e9-2f83-4902-a210-3b23d56ad662\") " Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.853593 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/892a47e9-2f83-4902-a210-3b23d56ad662-config-data\") pod \"892a47e9-2f83-4902-a210-3b23d56ad662\" (UID: \"892a47e9-2f83-4902-a210-3b23d56ad662\") " Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.854842 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4-logs" (OuterVolumeSpecName: "logs") pod "4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4" (UID: "4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.857359 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ce40113d-7ce5-4cff-b5e4-6d84102a6af6","Type":"ContainerStarted","Data":"0309c9e5797bef6b95216335d0f92d2f6d5119762352354572637ca446a799d6"} Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.861412 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/892a47e9-2f83-4902-a210-3b23d56ad662-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "892a47e9-2f83-4902-a210-3b23d56ad662" (UID: "892a47e9-2f83-4902-a210-3b23d56ad662"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.863026 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/892a47e9-2f83-4902-a210-3b23d56ad662-scripts" (OuterVolumeSpecName: "scripts") pod "892a47e9-2f83-4902-a210-3b23d56ad662" (UID: "892a47e9-2f83-4902-a210-3b23d56ad662"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.863109 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/892a47e9-2f83-4902-a210-3b23d56ad662-kube-api-access-n69g8" (OuterVolumeSpecName: "kube-api-access-n69g8") pod "892a47e9-2f83-4902-a210-3b23d56ad662" (UID: "892a47e9-2f83-4902-a210-3b23d56ad662"). InnerVolumeSpecName "kube-api-access-n69g8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.863452 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lh52x" event={"ID":"892a47e9-2f83-4902-a210-3b23d56ad662","Type":"ContainerDied","Data":"33ebc919dbb0f3b312684d5838d6f106af9eabdb5a35b987bf8949b3fc8a875e"} Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.863484 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33ebc919dbb0f3b312684d5838d6f106af9eabdb5a35b987bf8949b3fc8a875e" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.863539 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lh52x" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.865718 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/892a47e9-2f83-4902-a210-3b23d56ad662-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "892a47e9-2f83-4902-a210-3b23d56ad662" (UID: "892a47e9-2f83-4902-a210-3b23d56ad662"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.866046 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4-scripts" (OuterVolumeSpecName: "scripts") pod "4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4" (UID: "4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.893478 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4-kube-api-access-wmntc" (OuterVolumeSpecName: "kube-api-access-wmntc") pod "4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4" (UID: "4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4"). InnerVolumeSpecName "kube-api-access-wmntc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.916124 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4" (UID: "4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.916789 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/892a47e9-2f83-4902-a210-3b23d56ad662-config-data" (OuterVolumeSpecName: "config-data") pod "892a47e9-2f83-4902-a210-3b23d56ad662" (UID: "892a47e9-2f83-4902-a210-3b23d56ad662"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.921156 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4-config-data" (OuterVolumeSpecName: "config-data") pod "4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4" (UID: "4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.942459 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/892a47e9-2f83-4902-a210-3b23d56ad662-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "892a47e9-2f83-4902-a210-3b23d56ad662" (UID: "892a47e9-2f83-4902-a210-3b23d56ad662"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.956547 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7647f5f4ff-hmkw9"] Jan 22 09:28:34 crc kubenswrapper[4892]: E0122 09:28:34.956897 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4" containerName="placement-db-sync" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.956913 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4" containerName="placement-db-sync" Jan 22 09:28:34 crc kubenswrapper[4892]: E0122 09:28:34.956923 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="892a47e9-2f83-4902-a210-3b23d56ad662" containerName="keystone-bootstrap" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.956930 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="892a47e9-2f83-4902-a210-3b23d56ad662" containerName="keystone-bootstrap" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.957100 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4" containerName="placement-db-sync" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.957133 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="892a47e9-2f83-4902-a210-3b23d56ad662" containerName="keystone-bootstrap" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.957666 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7647f5f4ff-hmkw9" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.959537 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.959556 4892 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/892a47e9-2f83-4902-a210-3b23d56ad662-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.959564 4892 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/892a47e9-2f83-4902-a210-3b23d56ad662-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.959573 4892 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.959581 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/892a47e9-2f83-4902-a210-3b23d56ad662-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.959589 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/892a47e9-2f83-4902-a210-3b23d56ad662-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.959597 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.959606 4892 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/892a47e9-2f83-4902-a210-3b23d56ad662-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.959614 4892 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4-logs\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.959621 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmntc\" (UniqueName: \"kubernetes.io/projected/4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4-kube-api-access-wmntc\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.959629 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n69g8\" (UniqueName: \"kubernetes.io/projected/892a47e9-2f83-4902-a210-3b23d56ad662-kube-api-access-n69g8\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.962772 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.968685 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7647f5f4ff-hmkw9"] Jan 22 09:28:34 crc kubenswrapper[4892]: I0122 09:28:34.968987 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.062454 4892 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ab62f99-9658-4ad6-be05-4f0849b6d6d5-scripts\") pod \"keystone-7647f5f4ff-hmkw9\" (UID: \"6ab62f99-9658-4ad6-be05-4f0849b6d6d5\") " pod="openstack/keystone-7647f5f4ff-hmkw9" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.062510 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6ab62f99-9658-4ad6-be05-4f0849b6d6d5-credential-keys\") pod \"keystone-7647f5f4ff-hmkw9\" (UID: \"6ab62f99-9658-4ad6-be05-4f0849b6d6d5\") " pod="openstack/keystone-7647f5f4ff-hmkw9" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.062565 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ab62f99-9658-4ad6-be05-4f0849b6d6d5-combined-ca-bundle\") pod \"keystone-7647f5f4ff-hmkw9\" (UID: \"6ab62f99-9658-4ad6-be05-4f0849b6d6d5\") " pod="openstack/keystone-7647f5f4ff-hmkw9" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.062589 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ab62f99-9658-4ad6-be05-4f0849b6d6d5-internal-tls-certs\") pod \"keystone-7647f5f4ff-hmkw9\" (UID: \"6ab62f99-9658-4ad6-be05-4f0849b6d6d5\") " pod="openstack/keystone-7647f5f4ff-hmkw9" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.062613 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ab62f99-9658-4ad6-be05-4f0849b6d6d5-config-data\") pod \"keystone-7647f5f4ff-hmkw9\" (UID: \"6ab62f99-9658-4ad6-be05-4f0849b6d6d5\") " pod="openstack/keystone-7647f5f4ff-hmkw9" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.062645 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6ab62f99-9658-4ad6-be05-4f0849b6d6d5-fernet-keys\") pod \"keystone-7647f5f4ff-hmkw9\" (UID: \"6ab62f99-9658-4ad6-be05-4f0849b6d6d5\") " pod="openstack/keystone-7647f5f4ff-hmkw9" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.062708 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br8wb\" (UniqueName: \"kubernetes.io/projected/6ab62f99-9658-4ad6-be05-4f0849b6d6d5-kube-api-access-br8wb\") pod \"keystone-7647f5f4ff-hmkw9\" (UID: \"6ab62f99-9658-4ad6-be05-4f0849b6d6d5\") " pod="openstack/keystone-7647f5f4ff-hmkw9" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.062729 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ab62f99-9658-4ad6-be05-4f0849b6d6d5-public-tls-certs\") pod \"keystone-7647f5f4ff-hmkw9\" (UID: \"6ab62f99-9658-4ad6-be05-4f0849b6d6d5\") " pod="openstack/keystone-7647f5f4ff-hmkw9" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.066947 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8cff6669d-x8cnv"] Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.069180 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8cff6669d-x8cnv" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.074210 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.077318 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.089126 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8cff6669d-x8cnv"] Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.164073 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ab62f99-9658-4ad6-be05-4f0849b6d6d5-scripts\") pod \"keystone-7647f5f4ff-hmkw9\" (UID: \"6ab62f99-9658-4ad6-be05-4f0849b6d6d5\") " pod="openstack/keystone-7647f5f4ff-hmkw9" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.164109 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6ab62f99-9658-4ad6-be05-4f0849b6d6d5-credential-keys\") pod \"keystone-7647f5f4ff-hmkw9\" (UID: \"6ab62f99-9658-4ad6-be05-4f0849b6d6d5\") " pod="openstack/keystone-7647f5f4ff-hmkw9" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.164141 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa434e36-332b-401e-99b3-2dcb7d75da94-internal-tls-certs\") pod \"placement-8cff6669d-x8cnv\" (UID: \"fa434e36-332b-401e-99b3-2dcb7d75da94\") " pod="openstack/placement-8cff6669d-x8cnv" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.164161 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa434e36-332b-401e-99b3-2dcb7d75da94-config-data\") pod \"placement-8cff6669d-x8cnv\" (UID: \"fa434e36-332b-401e-99b3-2dcb7d75da94\") " pod="openstack/placement-8cff6669d-x8cnv" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.164845 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdhgk\" (UniqueName: \"kubernetes.io/projected/fa434e36-332b-401e-99b3-2dcb7d75da94-kube-api-access-wdhgk\") pod \"placement-8cff6669d-x8cnv\" (UID: \"fa434e36-332b-401e-99b3-2dcb7d75da94\") " pod="openstack/placement-8cff6669d-x8cnv" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.164892 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ab62f99-9658-4ad6-be05-4f0849b6d6d5-combined-ca-bundle\") pod \"keystone-7647f5f4ff-hmkw9\" (UID: \"6ab62f99-9658-4ad6-be05-4f0849b6d6d5\") " pod="openstack/keystone-7647f5f4ff-hmkw9" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.165050 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa434e36-332b-401e-99b3-2dcb7d75da94-combined-ca-bundle\") pod \"placement-8cff6669d-x8cnv\" (UID: \"fa434e36-332b-401e-99b3-2dcb7d75da94\") " pod="openstack/placement-8cff6669d-x8cnv" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.165095 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6ab62f99-9658-4ad6-be05-4f0849b6d6d5-internal-tls-certs\") pod \"keystone-7647f5f4ff-hmkw9\" (UID: \"6ab62f99-9658-4ad6-be05-4f0849b6d6d5\") " pod="openstack/keystone-7647f5f4ff-hmkw9" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.165151 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa434e36-332b-401e-99b3-2dcb7d75da94-logs\") pod \"placement-8cff6669d-x8cnv\" (UID: \"fa434e36-332b-401e-99b3-2dcb7d75da94\") " pod="openstack/placement-8cff6669d-x8cnv" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.165187 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ab62f99-9658-4ad6-be05-4f0849b6d6d5-config-data\") pod \"keystone-7647f5f4ff-hmkw9\" (UID: \"6ab62f99-9658-4ad6-be05-4f0849b6d6d5\") " pod="openstack/keystone-7647f5f4ff-hmkw9" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.165256 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6ab62f99-9658-4ad6-be05-4f0849b6d6d5-fernet-keys\") pod \"keystone-7647f5f4ff-hmkw9\" (UID: \"6ab62f99-9658-4ad6-be05-4f0849b6d6d5\") " pod="openstack/keystone-7647f5f4ff-hmkw9" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.165354 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa434e36-332b-401e-99b3-2dcb7d75da94-public-tls-certs\") pod \"placement-8cff6669d-x8cnv\" (UID: \"fa434e36-332b-401e-99b3-2dcb7d75da94\") " pod="openstack/placement-8cff6669d-x8cnv" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.165412 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa434e36-332b-401e-99b3-2dcb7d75da94-scripts\") pod \"placement-8cff6669d-x8cnv\" (UID: \"fa434e36-332b-401e-99b3-2dcb7d75da94\") " pod="openstack/placement-8cff6669d-x8cnv" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.165494 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br8wb\" (UniqueName: \"kubernetes.io/projected/6ab62f99-9658-4ad6-be05-4f0849b6d6d5-kube-api-access-br8wb\") pod \"keystone-7647f5f4ff-hmkw9\" (UID: \"6ab62f99-9658-4ad6-be05-4f0849b6d6d5\") " pod="openstack/keystone-7647f5f4ff-hmkw9" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.165544 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ab62f99-9658-4ad6-be05-4f0849b6d6d5-public-tls-certs\") pod \"keystone-7647f5f4ff-hmkw9\" (UID: \"6ab62f99-9658-4ad6-be05-4f0849b6d6d5\") " pod="openstack/keystone-7647f5f4ff-hmkw9" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.169039 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ab62f99-9658-4ad6-be05-4f0849b6d6d5-config-data\") pod \"keystone-7647f5f4ff-hmkw9\" (UID: \"6ab62f99-9658-4ad6-be05-4f0849b6d6d5\") " pod="openstack/keystone-7647f5f4ff-hmkw9" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.169111 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ab62f99-9658-4ad6-be05-4f0849b6d6d5-scripts\") pod \"keystone-7647f5f4ff-hmkw9\" (UID: 
\"6ab62f99-9658-4ad6-be05-4f0849b6d6d5\") " pod="openstack/keystone-7647f5f4ff-hmkw9" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.169701 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ab62f99-9658-4ad6-be05-4f0849b6d6d5-combined-ca-bundle\") pod \"keystone-7647f5f4ff-hmkw9\" (UID: \"6ab62f99-9658-4ad6-be05-4f0849b6d6d5\") " pod="openstack/keystone-7647f5f4ff-hmkw9" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.171424 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ab62f99-9658-4ad6-be05-4f0849b6d6d5-public-tls-certs\") pod \"keystone-7647f5f4ff-hmkw9\" (UID: \"6ab62f99-9658-4ad6-be05-4f0849b6d6d5\") " pod="openstack/keystone-7647f5f4ff-hmkw9" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.172016 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6ab62f99-9658-4ad6-be05-4f0849b6d6d5-fernet-keys\") pod \"keystone-7647f5f4ff-hmkw9\" (UID: \"6ab62f99-9658-4ad6-be05-4f0849b6d6d5\") " pod="openstack/keystone-7647f5f4ff-hmkw9" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.172690 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ab62f99-9658-4ad6-be05-4f0849b6d6d5-internal-tls-certs\") pod \"keystone-7647f5f4ff-hmkw9\" (UID: \"6ab62f99-9658-4ad6-be05-4f0849b6d6d5\") " pod="openstack/keystone-7647f5f4ff-hmkw9" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.173969 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6ab62f99-9658-4ad6-be05-4f0849b6d6d5-credential-keys\") pod \"keystone-7647f5f4ff-hmkw9\" (UID: \"6ab62f99-9658-4ad6-be05-4f0849b6d6d5\") " pod="openstack/keystone-7647f5f4ff-hmkw9" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.198621 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br8wb\" (UniqueName: \"kubernetes.io/projected/6ab62f99-9658-4ad6-be05-4f0849b6d6d5-kube-api-access-br8wb\") pod \"keystone-7647f5f4ff-hmkw9\" (UID: \"6ab62f99-9658-4ad6-be05-4f0849b6d6d5\") " pod="openstack/keystone-7647f5f4ff-hmkw9" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.266794 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdhgk\" (UniqueName: \"kubernetes.io/projected/fa434e36-332b-401e-99b3-2dcb7d75da94-kube-api-access-wdhgk\") pod \"placement-8cff6669d-x8cnv\" (UID: \"fa434e36-332b-401e-99b3-2dcb7d75da94\") " pod="openstack/placement-8cff6669d-x8cnv" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.266868 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa434e36-332b-401e-99b3-2dcb7d75da94-combined-ca-bundle\") pod \"placement-8cff6669d-x8cnv\" (UID: \"fa434e36-332b-401e-99b3-2dcb7d75da94\") " pod="openstack/placement-8cff6669d-x8cnv" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.266899 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa434e36-332b-401e-99b3-2dcb7d75da94-logs\") pod \"placement-8cff6669d-x8cnv\" (UID: \"fa434e36-332b-401e-99b3-2dcb7d75da94\") " pod="openstack/placement-8cff6669d-x8cnv" Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 
09:28:35.267002 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa434e36-332b-401e-99b3-2dcb7d75da94-public-tls-certs\") pod \"placement-8cff6669d-x8cnv\" (UID: \"fa434e36-332b-401e-99b3-2dcb7d75da94\") " pod="openstack/placement-8cff6669d-x8cnv"
Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.267040 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa434e36-332b-401e-99b3-2dcb7d75da94-scripts\") pod \"placement-8cff6669d-x8cnv\" (UID: \"fa434e36-332b-401e-99b3-2dcb7d75da94\") " pod="openstack/placement-8cff6669d-x8cnv"
Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.267129 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa434e36-332b-401e-99b3-2dcb7d75da94-internal-tls-certs\") pod \"placement-8cff6669d-x8cnv\" (UID: \"fa434e36-332b-401e-99b3-2dcb7d75da94\") " pod="openstack/placement-8cff6669d-x8cnv"
Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.267156 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa434e36-332b-401e-99b3-2dcb7d75da94-config-data\") pod \"placement-8cff6669d-x8cnv\" (UID: \"fa434e36-332b-401e-99b3-2dcb7d75da94\") " pod="openstack/placement-8cff6669d-x8cnv"
Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.268944 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa434e36-332b-401e-99b3-2dcb7d75da94-logs\") pod \"placement-8cff6669d-x8cnv\" (UID: \"fa434e36-332b-401e-99b3-2dcb7d75da94\") " pod="openstack/placement-8cff6669d-x8cnv"
Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.276156 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa434e36-332b-401e-99b3-2dcb7d75da94-combined-ca-bundle\") pod \"placement-8cff6669d-x8cnv\" (UID: \"fa434e36-332b-401e-99b3-2dcb7d75da94\") " pod="openstack/placement-8cff6669d-x8cnv"
Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.276818 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa434e36-332b-401e-99b3-2dcb7d75da94-scripts\") pod \"placement-8cff6669d-x8cnv\" (UID: \"fa434e36-332b-401e-99b3-2dcb7d75da94\") " pod="openstack/placement-8cff6669d-x8cnv"
Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.277354 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa434e36-332b-401e-99b3-2dcb7d75da94-public-tls-certs\") pod \"placement-8cff6669d-x8cnv\" (UID: \"fa434e36-332b-401e-99b3-2dcb7d75da94\") " pod="openstack/placement-8cff6669d-x8cnv"
Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.280094 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa434e36-332b-401e-99b3-2dcb7d75da94-config-data\") pod \"placement-8cff6669d-x8cnv\" (UID: \"fa434e36-332b-401e-99b3-2dcb7d75da94\") " pod="openstack/placement-8cff6669d-x8cnv"
Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.282779 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa434e36-332b-401e-99b3-2dcb7d75da94-internal-tls-certs\") pod \"placement-8cff6669d-x8cnv\" (UID: \"fa434e36-332b-401e-99b3-2dcb7d75da94\") " pod="openstack/placement-8cff6669d-x8cnv"
Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.283235 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7647f5f4ff-hmkw9"
Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.289391 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdhgk\" (UniqueName: \"kubernetes.io/projected/fa434e36-332b-401e-99b3-2dcb7d75da94-kube-api-access-wdhgk\") pod \"placement-8cff6669d-x8cnv\" (UID: \"fa434e36-332b-401e-99b3-2dcb7d75da94\") " pod="openstack/placement-8cff6669d-x8cnv"
Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.401760 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8cff6669d-x8cnv"
Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.839761 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7647f5f4ff-hmkw9"]
Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.904813 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4","Type":"ContainerStarted","Data":"2c7775e717dd114b82af204da201ff6ca2f17a0cb0e04dea261967b6e52e8d47"}
Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.920622 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7647f5f4ff-hmkw9" event={"ID":"6ab62f99-9658-4ad6-be05-4f0849b6d6d5","Type":"ContainerStarted","Data":"906c855a6de4f7ec6feed5af1049fd0ee103e614f5014f06a529dfd8e51e602d"}
Jan 22 09:28:35 crc kubenswrapper[4892]: I0122 09:28:35.938146 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ce40113d-7ce5-4cff-b5e4-6d84102a6af6","Type":"ContainerStarted","Data":"c0f099de934d3e1136d9e241dc36c5aca540c063e025dfd9576ea837ebfdedea"}
Jan 22 09:28:36 crc kubenswrapper[4892]: I0122 09:28:36.052888 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8cff6669d-x8cnv"]
Jan 22 09:28:36 crc kubenswrapper[4892]: I0122 09:28:36.969131 4892 generic.go:334] "Generic (PLEG): container finished" podID="06ba4135-00fc-4891-bad5-e2e666eabd91" containerID="bffd5591c6aea395c0c24ad45e178f44c9380bc6476f6ab41c1de9f5edbfabe7" exitCode=0
Jan 22 09:28:36 crc kubenswrapper[4892]: I0122 09:28:36.969189 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5zvl8" event={"ID":"06ba4135-00fc-4891-bad5-e2e666eabd91","Type":"ContainerDied","Data":"bffd5591c6aea395c0c24ad45e178f44c9380bc6476f6ab41c1de9f5edbfabe7"}
Jan 22 09:28:37 crc kubenswrapper[4892]: I0122 09:28:37.001470 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8cff6669d-x8cnv" event={"ID":"fa434e36-332b-401e-99b3-2dcb7d75da94","Type":"ContainerStarted","Data":"7b63a356f22ca09a3385d7924f671cad8fc7d71111ad4c9c5469acedeed148eb"}
Jan 22 09:28:37 crc kubenswrapper[4892]: I0122 09:28:37.001514 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8cff6669d-x8cnv" event={"ID":"fa434e36-332b-401e-99b3-2dcb7d75da94","Type":"ContainerStarted","Data":"ebbfaa29bd9952e80579c13f99c49fb6c4c106fa108e87c8e3112258e977d2b8"}
Jan 22 09:28:37 crc kubenswrapper[4892]: I0122 09:28:37.001524 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8cff6669d-x8cnv" event={"ID":"fa434e36-332b-401e-99b3-2dcb7d75da94","Type":"ContainerStarted","Data":"e2c0c14ac4ffb590dffe1c64e35a2f0ca62490342241ad92b91aeecac6e0ff16"}
Jan 22 09:28:37 crc kubenswrapper[4892]: I0122 09:28:37.002358 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-8cff6669d-x8cnv"
Jan 22 09:28:37 crc kubenswrapper[4892]: I0122 09:28:37.002456 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-8cff6669d-x8cnv"
Jan 22 09:28:37 crc kubenswrapper[4892]: I0122 09:28:37.013617 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7647f5f4ff-hmkw9" event={"ID":"6ab62f99-9658-4ad6-be05-4f0849b6d6d5","Type":"ContainerStarted","Data":"e45313b1fe8f8e338e67c197a841a73c35e333c31ffa48a9e67f5989955be2c8"}
Jan 22 09:28:37 crc kubenswrapper[4892]: I0122 09:28:37.013811 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7647f5f4ff-hmkw9"
Jan 22 09:28:37 crc kubenswrapper[4892]: I0122 09:28:37.019557 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lnmlq" event={"ID":"7fa10241-be09-4db5-894b-845654f34a21","Type":"ContainerStarted","Data":"d3ffe4c077a923df65a6d90f20f7710d12ac358b6c20db9d26af90c6bca7eb85"}
Jan 22 09:28:37 crc kubenswrapper[4892]: I0122 09:28:37.029215 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7gn6t" event={"ID":"8998452c-d0f3-42a2-8741-c70ffe854fda","Type":"ContainerStarted","Data":"5060f249a60748b47dfbd46df678cc5143123eb7368e6309d40a896bee2771d1"}
Jan 22 09:28:37 crc kubenswrapper[4892]: I0122 09:28:37.031608 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-8cff6669d-x8cnv" podStartSLOduration=2.031587131 podStartE2EDuration="2.031587131s" podCreationTimestamp="2026-01-22 09:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:28:37.018156688 +0000 UTC m=+1086.862235751" watchObservedRunningTime="2026-01-22 09:28:37.031587131 +0000 UTC m=+1086.875666194"
Jan 22 09:28:37 crc kubenswrapper[4892]: I0122 09:28:37.033447 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ce40113d-7ce5-4cff-b5e4-6d84102a6af6","Type":"ContainerStarted","Data":"a3ac5d889501d793e854a738c29613cf750642c47f17f3b68816dd1170ca9657"}
Jan 22 09:28:37 crc kubenswrapper[4892]: I0122 09:28:37.043330 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7647f5f4ff-hmkw9" podStartSLOduration=3.043306321 podStartE2EDuration="3.043306321s" podCreationTimestamp="2026-01-22 09:28:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:28:37.04073033 +0000 UTC m=+1086.884809393" watchObservedRunningTime="2026-01-22 09:28:37.043306321 +0000 UTC m=+1086.887385384"
Jan 22 09:28:37 crc kubenswrapper[4892]: I0122 09:28:37.067119 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-lnmlq" podStartSLOduration=4.188540695 podStartE2EDuration="43.067100022s" podCreationTimestamp="2026-01-22 09:27:54 +0000 UTC" firstStartedPulling="2026-01-22 09:27:56.524897545 +0000 UTC m=+1046.368976598" lastFinishedPulling="2026-01-22 09:28:35.403456862 +0000 UTC m=+1085.247535925" observedRunningTime="2026-01-22 09:28:37.059195032 +0000 UTC m=+1086.903274095" watchObservedRunningTime="2026-01-22 09:28:37.067100022 +0000 UTC m=+1086.911179085"
Jan 22 09:28:37 crc kubenswrapper[4892]: I0122 09:28:37.083797 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-7gn6t" podStartSLOduration=4.116449106 podStartE2EDuration="43.083779042s" podCreationTimestamp="2026-01-22 09:27:54 +0000 UTC" firstStartedPulling="2026-01-22 09:27:56.51094151 +0000 UTC m=+1046.355020583" lastFinishedPulling="2026-01-22 09:28:35.478271456 +0000 UTC m=+1085.322350519" observedRunningTime="2026-01-22 09:28:37.078780022 +0000 UTC m=+1086.922859075" watchObservedRunningTime="2026-01-22 09:28:37.083779042 +0000 UTC m=+1086.927858105"
Jan 22 09:28:37 crc kubenswrapper[4892]: I0122 09:28:37.105310 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.105278527 podStartE2EDuration="7.105278527s" podCreationTimestamp="2026-01-22 09:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:28:37.095304858 +0000 UTC m=+1086.939383921" watchObservedRunningTime="2026-01-22 09:28:37.105278527 +0000 UTC m=+1086.949357590"
Jan 22 09:28:38 crc kubenswrapper[4892]: I0122 09:28:38.408622 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-5zvl8"
Jan 22 09:28:38 crc kubenswrapper[4892]: I0122 09:28:38.439766 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xprmj\" (UniqueName: \"kubernetes.io/projected/06ba4135-00fc-4891-bad5-e2e666eabd91-kube-api-access-xprmj\") pod \"06ba4135-00fc-4891-bad5-e2e666eabd91\" (UID: \"06ba4135-00fc-4891-bad5-e2e666eabd91\") "
Jan 22 09:28:38 crc kubenswrapper[4892]: I0122 09:28:38.439829 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/06ba4135-00fc-4891-bad5-e2e666eabd91-config\") pod \"06ba4135-00fc-4891-bad5-e2e666eabd91\" (UID: \"06ba4135-00fc-4891-bad5-e2e666eabd91\") "
Jan 22 09:28:38 crc kubenswrapper[4892]: I0122 09:28:38.439855 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ba4135-00fc-4891-bad5-e2e666eabd91-combined-ca-bundle\") pod \"06ba4135-00fc-4891-bad5-e2e666eabd91\" (UID: \"06ba4135-00fc-4891-bad5-e2e666eabd91\") "
Jan 22 09:28:38 crc kubenswrapper[4892]: I0122 09:28:38.448537 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06ba4135-00fc-4891-bad5-e2e666eabd91-kube-api-access-xprmj" (OuterVolumeSpecName: "kube-api-access-xprmj") pod "06ba4135-00fc-4891-bad5-e2e666eabd91" (UID: "06ba4135-00fc-4891-bad5-e2e666eabd91"). InnerVolumeSpecName "kube-api-access-xprmj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:28:38 crc kubenswrapper[4892]: I0122 09:28:38.484028 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06ba4135-00fc-4891-bad5-e2e666eabd91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06ba4135-00fc-4891-bad5-e2e666eabd91" (UID: "06ba4135-00fc-4891-bad5-e2e666eabd91"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:28:38 crc kubenswrapper[4892]: I0122 09:28:38.544861 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xprmj\" (UniqueName: \"kubernetes.io/projected/06ba4135-00fc-4891-bad5-e2e666eabd91-kube-api-access-xprmj\") on node \"crc\" DevicePath \"\""
Jan 22 09:28:38 crc kubenswrapper[4892]: I0122 09:28:38.544898 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ba4135-00fc-4891-bad5-e2e666eabd91-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 09:28:38 crc kubenswrapper[4892]: I0122 09:28:38.562019 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06ba4135-00fc-4891-bad5-e2e666eabd91-config" (OuterVolumeSpecName: "config") pod "06ba4135-00fc-4891-bad5-e2e666eabd91" (UID: "06ba4135-00fc-4891-bad5-e2e666eabd91"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:28:38 crc kubenswrapper[4892]: I0122 09:28:38.645827 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/06ba4135-00fc-4891-bad5-e2e666eabd91-config\") on node \"crc\" DevicePath \"\""
Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.067379 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5zvl8" event={"ID":"06ba4135-00fc-4891-bad5-e2e666eabd91","Type":"ContainerDied","Data":"b14661a51e2b4a1757eefb97cce586dc1288f184fa8815866dfd89e58416f7c2"}
Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.067428 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b14661a51e2b4a1757eefb97cce586dc1288f184fa8815866dfd89e58416f7c2"
Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.067465 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-5zvl8"
Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.176195 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-2c5ww"]
Jan 22 09:28:39 crc kubenswrapper[4892]: E0122 09:28:39.176625 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06ba4135-00fc-4891-bad5-e2e666eabd91" containerName="neutron-db-sync"
Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.176643 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="06ba4135-00fc-4891-bad5-e2e666eabd91" containerName="neutron-db-sync"
Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.176817 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="06ba4135-00fc-4891-bad5-e2e666eabd91" containerName="neutron-db-sync"
Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.177734 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b9c8b59c-2c5ww"
Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.193767 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-2c5ww"]
Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.265354 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-54fd5b85c6-qxq5r"]
Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.269564 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-54fd5b85c6-qxq5r"
Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.271209 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/db3bd5d3-0b01-47ff-8410-95fd1e351060-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9c8b59c-2c5ww\" (UID: \"db3bd5d3-0b01-47ff-8410-95fd1e351060\") " pod="openstack/dnsmasq-dns-6b9c8b59c-2c5ww"
Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.271263 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db3bd5d3-0b01-47ff-8410-95fd1e351060-config\") pod \"dnsmasq-dns-6b9c8b59c-2c5ww\" (UID: \"db3bd5d3-0b01-47ff-8410-95fd1e351060\") " pod="openstack/dnsmasq-dns-6b9c8b59c-2c5ww"
Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.271280 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db3bd5d3-0b01-47ff-8410-95fd1e351060-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9c8b59c-2c5ww\" (UID: \"db3bd5d3-0b01-47ff-8410-95fd1e351060\") " pod="openstack/dnsmasq-dns-6b9c8b59c-2c5ww"
Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.271312 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxbx4\" (UniqueName: \"kubernetes.io/projected/db3bd5d3-0b01-47ff-8410-95fd1e351060-kube-api-access-zxbx4\") pod \"dnsmasq-dns-6b9c8b59c-2c5ww\" (UID: \"db3bd5d3-0b01-47ff-8410-95fd1e351060\") " pod="openstack/dnsmasq-dns-6b9c8b59c-2c5ww"
Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.271674 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db3bd5d3-0b01-47ff-8410-95fd1e351060-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9c8b59c-2c5ww\" (UID: \"db3bd5d3-0b01-47ff-8410-95fd1e351060\") " pod="openstack/dnsmasq-dns-6b9c8b59c-2c5ww"
Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.271725 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db3bd5d3-0b01-47ff-8410-95fd1e351060-dns-svc\") pod \"dnsmasq-dns-6b9c8b59c-2c5ww\" (UID: \"db3bd5d3-0b01-47ff-8410-95fd1e351060\") " pod="openstack/dnsmasq-dns-6b9c8b59c-2c5ww"
Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.275462 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.275730 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-rj7lp"
Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.275914 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.276055 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.289244 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-54fd5b85c6-qxq5r"]
Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.375175 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f3109c9e-e308-4022-8b5e-4b8b61319c24-httpd-config\") pod \"neutron-54fd5b85c6-qxq5r\" (UID: \"f3109c9e-e308-4022-8b5e-4b8b61319c24\") " pod="openstack/neutron-54fd5b85c6-qxq5r"
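Every volume above passes through the same reconciliation between desired and actual state of the world: VerifyControllerAttachedVolume (reconciler_common.go:245) confirms the volume is attached to the node, "MountVolume started" (reconciler_common.go:218) kicks off setup, and "MountVolume.SetUp succeeded" (operation_generator.go:637) confirms the mount. Setup runs concurrently per volume, so the succeeded lines arrive out of order. A sketch that pairs the two ends of the lifecycle per volume UniqueName and flags anything that never completed; it assumes the journal is piped in one entry per line, as reformatted here:

import re
import sys

# Match the UniqueName field, tolerating the escaped quotes (\") these
# kubenswrapper entries use.
uniq = re.compile(r'UniqueName: \\?"(?P<u>[^"\\]+)')
started, succeeded = {}, set()

for line in sys.stdin:
    m = uniq.search(line)
    if not m:
        continue
    if "operationExecutor.MountVolume started" in line:
        started.setdefault(m.group("u"), line.strip())
    elif "MountVolume.SetUp succeeded" in line:
        succeeded.add(m.group("u"))

for u in started:
    if u not in succeeded:
        print("no SetUp succeeded seen for:", u)

In a healthy capture like this one the sketch prints nothing; a volume that stays in "started" usually means a missing Secret or ConfigMap.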
\"kubernetes.io/secret/f3109c9e-e308-4022-8b5e-4b8b61319c24-httpd-config\") pod \"neutron-54fd5b85c6-qxq5r\" (UID: \"f3109c9e-e308-4022-8b5e-4b8b61319c24\") " pod="openstack/neutron-54fd5b85c6-qxq5r" Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.375223 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db3bd5d3-0b01-47ff-8410-95fd1e351060-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9c8b59c-2c5ww\" (UID: \"db3bd5d3-0b01-47ff-8410-95fd1e351060\") " pod="openstack/dnsmasq-dns-6b9c8b59c-2c5ww" Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.375249 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db3bd5d3-0b01-47ff-8410-95fd1e351060-dns-svc\") pod \"dnsmasq-dns-6b9c8b59c-2c5ww\" (UID: \"db3bd5d3-0b01-47ff-8410-95fd1e351060\") " pod="openstack/dnsmasq-dns-6b9c8b59c-2c5ww" Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.375275 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/db3bd5d3-0b01-47ff-8410-95fd1e351060-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9c8b59c-2c5ww\" (UID: \"db3bd5d3-0b01-47ff-8410-95fd1e351060\") " pod="openstack/dnsmasq-dns-6b9c8b59c-2c5ww" Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.375332 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db3bd5d3-0b01-47ff-8410-95fd1e351060-config\") pod \"dnsmasq-dns-6b9c8b59c-2c5ww\" (UID: \"db3bd5d3-0b01-47ff-8410-95fd1e351060\") " pod="openstack/dnsmasq-dns-6b9c8b59c-2c5ww" Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.375349 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db3bd5d3-0b01-47ff-8410-95fd1e351060-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9c8b59c-2c5ww\" (UID: \"db3bd5d3-0b01-47ff-8410-95fd1e351060\") " pod="openstack/dnsmasq-dns-6b9c8b59c-2c5ww" Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.375368 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxbx4\" (UniqueName: \"kubernetes.io/projected/db3bd5d3-0b01-47ff-8410-95fd1e351060-kube-api-access-zxbx4\") pod \"dnsmasq-dns-6b9c8b59c-2c5ww\" (UID: \"db3bd5d3-0b01-47ff-8410-95fd1e351060\") " pod="openstack/dnsmasq-dns-6b9c8b59c-2c5ww" Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.375414 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3109c9e-e308-4022-8b5e-4b8b61319c24-combined-ca-bundle\") pod \"neutron-54fd5b85c6-qxq5r\" (UID: \"f3109c9e-e308-4022-8b5e-4b8b61319c24\") " pod="openstack/neutron-54fd5b85c6-qxq5r" Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.375442 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3109c9e-e308-4022-8b5e-4b8b61319c24-ovndb-tls-certs\") pod \"neutron-54fd5b85c6-qxq5r\" (UID: \"f3109c9e-e308-4022-8b5e-4b8b61319c24\") " pod="openstack/neutron-54fd5b85c6-qxq5r" Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.375467 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/f3109c9e-e308-4022-8b5e-4b8b61319c24-config\") pod \"neutron-54fd5b85c6-qxq5r\" (UID: \"f3109c9e-e308-4022-8b5e-4b8b61319c24\") " pod="openstack/neutron-54fd5b85c6-qxq5r" Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.375513 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv4gp\" (UniqueName: \"kubernetes.io/projected/f3109c9e-e308-4022-8b5e-4b8b61319c24-kube-api-access-qv4gp\") pod \"neutron-54fd5b85c6-qxq5r\" (UID: \"f3109c9e-e308-4022-8b5e-4b8b61319c24\") " pod="openstack/neutron-54fd5b85c6-qxq5r" Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.376475 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db3bd5d3-0b01-47ff-8410-95fd1e351060-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9c8b59c-2c5ww\" (UID: \"db3bd5d3-0b01-47ff-8410-95fd1e351060\") " pod="openstack/dnsmasq-dns-6b9c8b59c-2c5ww" Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.376495 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db3bd5d3-0b01-47ff-8410-95fd1e351060-dns-svc\") pod \"dnsmasq-dns-6b9c8b59c-2c5ww\" (UID: \"db3bd5d3-0b01-47ff-8410-95fd1e351060\") " pod="openstack/dnsmasq-dns-6b9c8b59c-2c5ww" Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.380744 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db3bd5d3-0b01-47ff-8410-95fd1e351060-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9c8b59c-2c5ww\" (UID: \"db3bd5d3-0b01-47ff-8410-95fd1e351060\") " pod="openstack/dnsmasq-dns-6b9c8b59c-2c5ww" Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.383013 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db3bd5d3-0b01-47ff-8410-95fd1e351060-config\") pod \"dnsmasq-dns-6b9c8b59c-2c5ww\" (UID: \"db3bd5d3-0b01-47ff-8410-95fd1e351060\") " pod="openstack/dnsmasq-dns-6b9c8b59c-2c5ww" Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.383089 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/db3bd5d3-0b01-47ff-8410-95fd1e351060-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9c8b59c-2c5ww\" (UID: \"db3bd5d3-0b01-47ff-8410-95fd1e351060\") " pod="openstack/dnsmasq-dns-6b9c8b59c-2c5ww" Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.400469 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxbx4\" (UniqueName: \"kubernetes.io/projected/db3bd5d3-0b01-47ff-8410-95fd1e351060-kube-api-access-zxbx4\") pod \"dnsmasq-dns-6b9c8b59c-2c5ww\" (UID: \"db3bd5d3-0b01-47ff-8410-95fd1e351060\") " pod="openstack/dnsmasq-dns-6b9c8b59c-2c5ww" Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.477434 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3109c9e-e308-4022-8b5e-4b8b61319c24-combined-ca-bundle\") pod \"neutron-54fd5b85c6-qxq5r\" (UID: \"f3109c9e-e308-4022-8b5e-4b8b61319c24\") " pod="openstack/neutron-54fd5b85c6-qxq5r" Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.477484 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3109c9e-e308-4022-8b5e-4b8b61319c24-ovndb-tls-certs\") pod \"neutron-54fd5b85c6-qxq5r\" 
(UID: \"f3109c9e-e308-4022-8b5e-4b8b61319c24\") " pod="openstack/neutron-54fd5b85c6-qxq5r" Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.477511 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f3109c9e-e308-4022-8b5e-4b8b61319c24-config\") pod \"neutron-54fd5b85c6-qxq5r\" (UID: \"f3109c9e-e308-4022-8b5e-4b8b61319c24\") " pod="openstack/neutron-54fd5b85c6-qxq5r" Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.477557 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv4gp\" (UniqueName: \"kubernetes.io/projected/f3109c9e-e308-4022-8b5e-4b8b61319c24-kube-api-access-qv4gp\") pod \"neutron-54fd5b85c6-qxq5r\" (UID: \"f3109c9e-e308-4022-8b5e-4b8b61319c24\") " pod="openstack/neutron-54fd5b85c6-qxq5r" Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.477584 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f3109c9e-e308-4022-8b5e-4b8b61319c24-httpd-config\") pod \"neutron-54fd5b85c6-qxq5r\" (UID: \"f3109c9e-e308-4022-8b5e-4b8b61319c24\") " pod="openstack/neutron-54fd5b85c6-qxq5r" Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.483789 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3109c9e-e308-4022-8b5e-4b8b61319c24-combined-ca-bundle\") pod \"neutron-54fd5b85c6-qxq5r\" (UID: \"f3109c9e-e308-4022-8b5e-4b8b61319c24\") " pod="openstack/neutron-54fd5b85c6-qxq5r" Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.491913 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f3109c9e-e308-4022-8b5e-4b8b61319c24-config\") pod \"neutron-54fd5b85c6-qxq5r\" (UID: \"f3109c9e-e308-4022-8b5e-4b8b61319c24\") " pod="openstack/neutron-54fd5b85c6-qxq5r" Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.500022 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f3109c9e-e308-4022-8b5e-4b8b61319c24-httpd-config\") pod \"neutron-54fd5b85c6-qxq5r\" (UID: \"f3109c9e-e308-4022-8b5e-4b8b61319c24\") " pod="openstack/neutron-54fd5b85c6-qxq5r" Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.500720 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3109c9e-e308-4022-8b5e-4b8b61319c24-ovndb-tls-certs\") pod \"neutron-54fd5b85c6-qxq5r\" (UID: \"f3109c9e-e308-4022-8b5e-4b8b61319c24\") " pod="openstack/neutron-54fd5b85c6-qxq5r" Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.504275 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv4gp\" (UniqueName: \"kubernetes.io/projected/f3109c9e-e308-4022-8b5e-4b8b61319c24-kube-api-access-qv4gp\") pod \"neutron-54fd5b85c6-qxq5r\" (UID: \"f3109c9e-e308-4022-8b5e-4b8b61319c24\") " pod="openstack/neutron-54fd5b85c6-qxq5r" Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.508780 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b9c8b59c-2c5ww" Jan 22 09:28:39 crc kubenswrapper[4892]: I0122 09:28:39.604798 4892 util.go:30] "No sandbox for pod can be found. 
Jan 22 09:28:40 crc kubenswrapper[4892]: I0122 09:28:40.098094 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-2c5ww"]
Jan 22 09:28:40 crc kubenswrapper[4892]: I0122 09:28:40.194326 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-54fd5b85c6-qxq5r"]
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.104637 4892 generic.go:334] "Generic (PLEG): container finished" podID="db3bd5d3-0b01-47ff-8410-95fd1e351060" containerID="a2a4ccc6638aca97624c949d457e62481e22b302b156ab36074dd8b85e838219" exitCode=0
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.104719 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-2c5ww" event={"ID":"db3bd5d3-0b01-47ff-8410-95fd1e351060","Type":"ContainerDied","Data":"a2a4ccc6638aca97624c949d457e62481e22b302b156ab36074dd8b85e838219"}
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.109889 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-2c5ww" event={"ID":"db3bd5d3-0b01-47ff-8410-95fd1e351060","Type":"ContainerStarted","Data":"21dab7644292125fb7de4adcc4b75e48c59db69be5a05561ca367219a9021428"}
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.121545 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54fd5b85c6-qxq5r" event={"ID":"f3109c9e-e308-4022-8b5e-4b8b61319c24","Type":"ContainerStarted","Data":"38cff84053446e01eeaed5cc30661cff4e7386deec1a1445e6d4ccbdad920ac0"}
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.121629 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54fd5b85c6-qxq5r" event={"ID":"f3109c9e-e308-4022-8b5e-4b8b61319c24","Type":"ContainerStarted","Data":"2f37ec37b00a58ded59f8bb517b3c2e345029424815d59a14e4bd62a544d086f"}
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.141350 4892 generic.go:334] "Generic (PLEG): container finished" podID="7fa10241-be09-4db5-894b-845654f34a21" containerID="d3ffe4c077a923df65a6d90f20f7710d12ac358b6c20db9d26af90c6bca7eb85" exitCode=0
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.141389 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lnmlq" event={"ID":"7fa10241-be09-4db5-894b-845654f34a21","Type":"ContainerDied","Data":"d3ffe4c077a923df65a6d90f20f7710d12ac358b6c20db9d26af90c6bca7eb85"}
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.229129 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.229172 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.263067 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.293359 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.654110 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5b7dcc6b6f-vkw7t"]
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.656389 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5b7dcc6b6f-vkw7t"
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.658445 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.662989 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.673039 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5b7dcc6b6f-vkw7t"]
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.821243 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d80b524-788b-4fdf-b8bf-28ae522512e1-internal-tls-certs\") pod \"neutron-5b7dcc6b6f-vkw7t\" (UID: \"6d80b524-788b-4fdf-b8bf-28ae522512e1\") " pod="openstack/neutron-5b7dcc6b6f-vkw7t"
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.821402 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d80b524-788b-4fdf-b8bf-28ae522512e1-ovndb-tls-certs\") pod \"neutron-5b7dcc6b6f-vkw7t\" (UID: \"6d80b524-788b-4fdf-b8bf-28ae522512e1\") " pod="openstack/neutron-5b7dcc6b6f-vkw7t"
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.821446 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d80b524-788b-4fdf-b8bf-28ae522512e1-public-tls-certs\") pod \"neutron-5b7dcc6b6f-vkw7t\" (UID: \"6d80b524-788b-4fdf-b8bf-28ae522512e1\") " pod="openstack/neutron-5b7dcc6b6f-vkw7t"
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.821572 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6d80b524-788b-4fdf-b8bf-28ae522512e1-config\") pod \"neutron-5b7dcc6b6f-vkw7t\" (UID: \"6d80b524-788b-4fdf-b8bf-28ae522512e1\") " pod="openstack/neutron-5b7dcc6b6f-vkw7t"
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.821635 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d80b524-788b-4fdf-b8bf-28ae522512e1-combined-ca-bundle\") pod \"neutron-5b7dcc6b6f-vkw7t\" (UID: \"6d80b524-788b-4fdf-b8bf-28ae522512e1\") " pod="openstack/neutron-5b7dcc6b6f-vkw7t"
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.821774 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6d80b524-788b-4fdf-b8bf-28ae522512e1-httpd-config\") pod \"neutron-5b7dcc6b6f-vkw7t\" (UID: \"6d80b524-788b-4fdf-b8bf-28ae522512e1\") " pod="openstack/neutron-5b7dcc6b6f-vkw7t"
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.821838 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69m8c\" (UniqueName: \"kubernetes.io/projected/6d80b524-788b-4fdf-b8bf-28ae522512e1-kube-api-access-69m8c\") pod \"neutron-5b7dcc6b6f-vkw7t\" (UID: \"6d80b524-788b-4fdf-b8bf-28ae522512e1\") " pod="openstack/neutron-5b7dcc6b6f-vkw7t"
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.923295 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6d80b524-788b-4fdf-b8bf-28ae522512e1-httpd-config\") pod \"neutron-5b7dcc6b6f-vkw7t\" (UID: \"6d80b524-788b-4fdf-b8bf-28ae522512e1\") " pod="openstack/neutron-5b7dcc6b6f-vkw7t"
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.923377 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69m8c\" (UniqueName: \"kubernetes.io/projected/6d80b524-788b-4fdf-b8bf-28ae522512e1-kube-api-access-69m8c\") pod \"neutron-5b7dcc6b6f-vkw7t\" (UID: \"6d80b524-788b-4fdf-b8bf-28ae522512e1\") " pod="openstack/neutron-5b7dcc6b6f-vkw7t"
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.923452 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d80b524-788b-4fdf-b8bf-28ae522512e1-internal-tls-certs\") pod \"neutron-5b7dcc6b6f-vkw7t\" (UID: \"6d80b524-788b-4fdf-b8bf-28ae522512e1\") " pod="openstack/neutron-5b7dcc6b6f-vkw7t"
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.923535 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d80b524-788b-4fdf-b8bf-28ae522512e1-ovndb-tls-certs\") pod \"neutron-5b7dcc6b6f-vkw7t\" (UID: \"6d80b524-788b-4fdf-b8bf-28ae522512e1\") " pod="openstack/neutron-5b7dcc6b6f-vkw7t"
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.923571 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d80b524-788b-4fdf-b8bf-28ae522512e1-public-tls-certs\") pod \"neutron-5b7dcc6b6f-vkw7t\" (UID: \"6d80b524-788b-4fdf-b8bf-28ae522512e1\") " pod="openstack/neutron-5b7dcc6b6f-vkw7t"
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.923692 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6d80b524-788b-4fdf-b8bf-28ae522512e1-config\") pod \"neutron-5b7dcc6b6f-vkw7t\" (UID: \"6d80b524-788b-4fdf-b8bf-28ae522512e1\") " pod="openstack/neutron-5b7dcc6b6f-vkw7t"
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.923741 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d80b524-788b-4fdf-b8bf-28ae522512e1-combined-ca-bundle\") pod \"neutron-5b7dcc6b6f-vkw7t\" (UID: \"6d80b524-788b-4fdf-b8bf-28ae522512e1\") " pod="openstack/neutron-5b7dcc6b6f-vkw7t"
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.931683 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d80b524-788b-4fdf-b8bf-28ae522512e1-public-tls-certs\") pod \"neutron-5b7dcc6b6f-vkw7t\" (UID: \"6d80b524-788b-4fdf-b8bf-28ae522512e1\") " pod="openstack/neutron-5b7dcc6b6f-vkw7t"
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.933903 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d80b524-788b-4fdf-b8bf-28ae522512e1-ovndb-tls-certs\") pod \"neutron-5b7dcc6b6f-vkw7t\" (UID: \"6d80b524-788b-4fdf-b8bf-28ae522512e1\") " pod="openstack/neutron-5b7dcc6b6f-vkw7t"
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.935906 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6d80b524-788b-4fdf-b8bf-28ae522512e1-httpd-config\") pod \"neutron-5b7dcc6b6f-vkw7t\" (UID: \"6d80b524-788b-4fdf-b8bf-28ae522512e1\") " pod="openstack/neutron-5b7dcc6b6f-vkw7t"
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.936621 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6d80b524-788b-4fdf-b8bf-28ae522512e1-config\") pod \"neutron-5b7dcc6b6f-vkw7t\" (UID: \"6d80b524-788b-4fdf-b8bf-28ae522512e1\") " pod="openstack/neutron-5b7dcc6b6f-vkw7t"
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.945962 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69m8c\" (UniqueName: \"kubernetes.io/projected/6d80b524-788b-4fdf-b8bf-28ae522512e1-kube-api-access-69m8c\") pod \"neutron-5b7dcc6b6f-vkw7t\" (UID: \"6d80b524-788b-4fdf-b8bf-28ae522512e1\") " pod="openstack/neutron-5b7dcc6b6f-vkw7t"
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.949618 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d80b524-788b-4fdf-b8bf-28ae522512e1-internal-tls-certs\") pod \"neutron-5b7dcc6b6f-vkw7t\" (UID: \"6d80b524-788b-4fdf-b8bf-28ae522512e1\") " pod="openstack/neutron-5b7dcc6b6f-vkw7t"
Jan 22 09:28:41 crc kubenswrapper[4892]: I0122 09:28:41.953191 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d80b524-788b-4fdf-b8bf-28ae522512e1-combined-ca-bundle\") pod \"neutron-5b7dcc6b6f-vkw7t\" (UID: \"6d80b524-788b-4fdf-b8bf-28ae522512e1\") " pod="openstack/neutron-5b7dcc6b6f-vkw7t"
Jan 22 09:28:42 crc kubenswrapper[4892]: I0122 09:28:42.049694 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5b7dcc6b6f-vkw7t"
Jan 22 09:28:42 crc kubenswrapper[4892]: I0122 09:28:42.148658 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 22 09:28:42 crc kubenswrapper[4892]: I0122 09:28:42.148923 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 22 09:28:44 crc kubenswrapper[4892]: I0122 09:28:44.170738 4892 generic.go:334] "Generic (PLEG): container finished" podID="8998452c-d0f3-42a2-8741-c70ffe854fda" containerID="5060f249a60748b47dfbd46df678cc5143123eb7368e6309d40a896bee2771d1" exitCode=0
Jan 22 09:28:44 crc kubenswrapper[4892]: I0122 09:28:44.171229 4892 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 22 09:28:44 crc kubenswrapper[4892]: I0122 09:28:44.171239 4892 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 22 09:28:44 crc kubenswrapper[4892]: I0122 09:28:44.170883 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7gn6t" event={"ID":"8998452c-d0f3-42a2-8741-c70ffe854fda","Type":"ContainerDied","Data":"5060f249a60748b47dfbd46df678cc5143123eb7368e6309d40a896bee2771d1"}
Jan 22 09:28:44 crc kubenswrapper[4892]: I0122 09:28:44.190681 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-79777d5484-zk25q" podUID="c837fca4-ae2c-43fd-850c-f2aca8331d27" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused"
Jan 22 09:28:44 crc kubenswrapper[4892]: I0122 09:28:44.306575 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7bd8749ddb-x9h4l" podUID="a434b179-017a-4112-a673-1859114a62ed" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused"
podUID="a434b179-017a-4112-a673-1859114a62ed" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 22 09:28:44 crc kubenswrapper[4892]: I0122 09:28:44.543698 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 22 09:28:44 crc kubenswrapper[4892]: I0122 09:28:44.554086 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 22 09:28:46 crc kubenswrapper[4892]: I0122 09:28:46.668877 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7gn6t" Jan 22 09:28:46 crc kubenswrapper[4892]: I0122 09:28:46.676194 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-lnmlq" Jan 22 09:28:46 crc kubenswrapper[4892]: I0122 09:28:46.729226 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7fa10241-be09-4db5-894b-845654f34a21-db-sync-config-data\") pod \"7fa10241-be09-4db5-894b-845654f34a21\" (UID: \"7fa10241-be09-4db5-894b-845654f34a21\") " Jan 22 09:28:46 crc kubenswrapper[4892]: I0122 09:28:46.729752 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8jp6\" (UniqueName: \"kubernetes.io/projected/7fa10241-be09-4db5-894b-845654f34a21-kube-api-access-n8jp6\") pod \"7fa10241-be09-4db5-894b-845654f34a21\" (UID: \"7fa10241-be09-4db5-894b-845654f34a21\") " Jan 22 09:28:46 crc kubenswrapper[4892]: I0122 09:28:46.729781 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8998452c-d0f3-42a2-8741-c70ffe854fda-scripts\") pod \"8998452c-d0f3-42a2-8741-c70ffe854fda\" (UID: \"8998452c-d0f3-42a2-8741-c70ffe854fda\") " Jan 22 09:28:46 crc kubenswrapper[4892]: I0122 09:28:46.729925 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fa10241-be09-4db5-894b-845654f34a21-combined-ca-bundle\") pod \"7fa10241-be09-4db5-894b-845654f34a21\" (UID: \"7fa10241-be09-4db5-894b-845654f34a21\") " Jan 22 09:28:46 crc kubenswrapper[4892]: I0122 09:28:46.729963 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8998452c-d0f3-42a2-8741-c70ffe854fda-config-data\") pod \"8998452c-d0f3-42a2-8741-c70ffe854fda\" (UID: \"8998452c-d0f3-42a2-8741-c70ffe854fda\") " Jan 22 09:28:46 crc kubenswrapper[4892]: I0122 09:28:46.729985 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8998452c-d0f3-42a2-8741-c70ffe854fda-combined-ca-bundle\") pod \"8998452c-d0f3-42a2-8741-c70ffe854fda\" (UID: \"8998452c-d0f3-42a2-8741-c70ffe854fda\") " Jan 22 09:28:46 crc kubenswrapper[4892]: I0122 09:28:46.730036 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f8xr\" (UniqueName: \"kubernetes.io/projected/8998452c-d0f3-42a2-8741-c70ffe854fda-kube-api-access-8f8xr\") pod \"8998452c-d0f3-42a2-8741-c70ffe854fda\" (UID: \"8998452c-d0f3-42a2-8741-c70ffe854fda\") " Jan 22 09:28:46 crc kubenswrapper[4892]: I0122 09:28:46.730066 4892 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8998452c-d0f3-42a2-8741-c70ffe854fda-db-sync-config-data\") pod \"8998452c-d0f3-42a2-8741-c70ffe854fda\" (UID: \"8998452c-d0f3-42a2-8741-c70ffe854fda\") " Jan 22 09:28:46 crc kubenswrapper[4892]: I0122 09:28:46.730099 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8998452c-d0f3-42a2-8741-c70ffe854fda-etc-machine-id\") pod \"8998452c-d0f3-42a2-8741-c70ffe854fda\" (UID: \"8998452c-d0f3-42a2-8741-c70ffe854fda\") " Jan 22 09:28:46 crc kubenswrapper[4892]: I0122 09:28:46.730530 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8998452c-d0f3-42a2-8741-c70ffe854fda-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8998452c-d0f3-42a2-8741-c70ffe854fda" (UID: "8998452c-d0f3-42a2-8741-c70ffe854fda"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:28:46 crc kubenswrapper[4892]: I0122 09:28:46.736807 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8998452c-d0f3-42a2-8741-c70ffe854fda-kube-api-access-8f8xr" (OuterVolumeSpecName: "kube-api-access-8f8xr") pod "8998452c-d0f3-42a2-8741-c70ffe854fda" (UID: "8998452c-d0f3-42a2-8741-c70ffe854fda"). InnerVolumeSpecName "kube-api-access-8f8xr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:28:46 crc kubenswrapper[4892]: I0122 09:28:46.736917 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8998452c-d0f3-42a2-8741-c70ffe854fda-scripts" (OuterVolumeSpecName: "scripts") pod "8998452c-d0f3-42a2-8741-c70ffe854fda" (UID: "8998452c-d0f3-42a2-8741-c70ffe854fda"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:46 crc kubenswrapper[4892]: I0122 09:28:46.737402 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8998452c-d0f3-42a2-8741-c70ffe854fda-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8998452c-d0f3-42a2-8741-c70ffe854fda" (UID: "8998452c-d0f3-42a2-8741-c70ffe854fda"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:46 crc kubenswrapper[4892]: I0122 09:28:46.737513 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fa10241-be09-4db5-894b-845654f34a21-kube-api-access-n8jp6" (OuterVolumeSpecName: "kube-api-access-n8jp6") pod "7fa10241-be09-4db5-894b-845654f34a21" (UID: "7fa10241-be09-4db5-894b-845654f34a21"). InnerVolumeSpecName "kube-api-access-n8jp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:28:46 crc kubenswrapper[4892]: I0122 09:28:46.758033 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fa10241-be09-4db5-894b-845654f34a21-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7fa10241-be09-4db5-894b-845654f34a21" (UID: "7fa10241-be09-4db5-894b-845654f34a21"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:46 crc kubenswrapper[4892]: I0122 09:28:46.762839 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8998452c-d0f3-42a2-8741-c70ffe854fda-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8998452c-d0f3-42a2-8741-c70ffe854fda" (UID: "8998452c-d0f3-42a2-8741-c70ffe854fda"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:46 crc kubenswrapper[4892]: I0122 09:28:46.771277 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fa10241-be09-4db5-894b-845654f34a21-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fa10241-be09-4db5-894b-845654f34a21" (UID: "7fa10241-be09-4db5-894b-845654f34a21"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:46 crc kubenswrapper[4892]: I0122 09:28:46.786101 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8998452c-d0f3-42a2-8741-c70ffe854fda-config-data" (OuterVolumeSpecName: "config-data") pod "8998452c-d0f3-42a2-8741-c70ffe854fda" (UID: "8998452c-d0f3-42a2-8741-c70ffe854fda"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:46 crc kubenswrapper[4892]: I0122 09:28:46.832358 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fa10241-be09-4db5-894b-845654f34a21-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:46 crc kubenswrapper[4892]: I0122 09:28:46.832390 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8998452c-d0f3-42a2-8741-c70ffe854fda-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:46 crc kubenswrapper[4892]: I0122 09:28:46.832399 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8998452c-d0f3-42a2-8741-c70ffe854fda-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:46 crc kubenswrapper[4892]: I0122 09:28:46.832407 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f8xr\" (UniqueName: \"kubernetes.io/projected/8998452c-d0f3-42a2-8741-c70ffe854fda-kube-api-access-8f8xr\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:46 crc kubenswrapper[4892]: I0122 09:28:46.832417 4892 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8998452c-d0f3-42a2-8741-c70ffe854fda-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:46 crc kubenswrapper[4892]: I0122 09:28:46.832424 4892 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8998452c-d0f3-42a2-8741-c70ffe854fda-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:46 crc kubenswrapper[4892]: I0122 09:28:46.832432 4892 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7fa10241-be09-4db5-894b-845654f34a21-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:46 crc kubenswrapper[4892]: I0122 09:28:46.832440 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8jp6\" (UniqueName: \"kubernetes.io/projected/7fa10241-be09-4db5-894b-845654f34a21-kube-api-access-n8jp6\") on node \"crc\" 
DevicePath \"\"" Jan 22 09:28:46 crc kubenswrapper[4892]: I0122 09:28:46.832447 4892 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8998452c-d0f3-42a2-8741-c70ffe854fda-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:47 crc kubenswrapper[4892]: I0122 09:28:47.199643 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-lnmlq" Jan 22 09:28:47 crc kubenswrapper[4892]: I0122 09:28:47.199663 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lnmlq" event={"ID":"7fa10241-be09-4db5-894b-845654f34a21","Type":"ContainerDied","Data":"4d11fba568a9b4a1b840e17981766f134e3f4b353fbca67e6ec50cbbbbe146dc"} Jan 22 09:28:47 crc kubenswrapper[4892]: I0122 09:28:47.199983 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d11fba568a9b4a1b840e17981766f134e3f4b353fbca67e6ec50cbbbbe146dc" Jan 22 09:28:47 crc kubenswrapper[4892]: I0122 09:28:47.203601 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7gn6t" event={"ID":"8998452c-d0f3-42a2-8741-c70ffe854fda","Type":"ContainerDied","Data":"0005ce942026eabc6d92729f78d41a6a0b8826c9e10a49dca4032d66ad43446a"} Jan 22 09:28:47 crc kubenswrapper[4892]: I0122 09:28:47.203646 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0005ce942026eabc6d92729f78d41a6a0b8826c9e10a49dca4032d66ad43446a" Jan 22 09:28:47 crc kubenswrapper[4892]: I0122 09:28:47.203693 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7gn6t" Jan 22 09:28:47 crc kubenswrapper[4892]: E0122 09:28:47.419152 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="0b1b8dc7-a488-447d-8c4a-21119f3e3dd4" Jan 22 09:28:47 crc kubenswrapper[4892]: I0122 09:28:47.491061 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5b7dcc6b6f-vkw7t"] Jan 22 09:28:47 crc kubenswrapper[4892]: I0122 09:28:47.897444 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7c7f45b4bf-xx9p2"] Jan 22 09:28:47 crc kubenswrapper[4892]: E0122 09:28:47.898185 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fa10241-be09-4db5-894b-845654f34a21" containerName="barbican-db-sync" Jan 22 09:28:47 crc kubenswrapper[4892]: I0122 09:28:47.898263 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fa10241-be09-4db5-894b-845654f34a21" containerName="barbican-db-sync" Jan 22 09:28:47 crc kubenswrapper[4892]: E0122 09:28:47.898318 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8998452c-d0f3-42a2-8741-c70ffe854fda" containerName="cinder-db-sync" Jan 22 09:28:47 crc kubenswrapper[4892]: I0122 09:28:47.898329 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="8998452c-d0f3-42a2-8741-c70ffe854fda" containerName="cinder-db-sync" Jan 22 09:28:47 crc kubenswrapper[4892]: I0122 09:28:47.898524 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="8998452c-d0f3-42a2-8741-c70ffe854fda" containerName="cinder-db-sync" Jan 22 09:28:47 crc kubenswrapper[4892]: I0122 09:28:47.898561 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fa10241-be09-4db5-894b-845654f34a21" 
containerName="barbican-db-sync" Jan 22 09:28:47 crc kubenswrapper[4892]: I0122 09:28:47.900093 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7c7f45b4bf-xx9p2" Jan 22 09:28:47 crc kubenswrapper[4892]: I0122 09:28:47.908154 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 22 09:28:47 crc kubenswrapper[4892]: I0122 09:28:47.908323 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-6sstj" Jan 22 09:28:47 crc kubenswrapper[4892]: I0122 09:28:47.908657 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 22 09:28:47 crc kubenswrapper[4892]: I0122 09:28:47.921619 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7c7f45b4bf-xx9p2"] Jan 22 09:28:47 crc kubenswrapper[4892]: I0122 09:28:47.939373 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6cbdbc497d-dqskw"] Jan 22 09:28:47 crc kubenswrapper[4892]: I0122 09:28:47.947575 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6cbdbc497d-dqskw" Jan 22 09:28:47 crc kubenswrapper[4892]: I0122 09:28:47.955816 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.063387 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a6b0877-2c23-4ebd-a433-620571e4c0bf-config-data-custom\") pod \"barbican-worker-7c7f45b4bf-xx9p2\" (UID: \"1a6b0877-2c23-4ebd-a433-620571e4c0bf\") " pod="openstack/barbican-worker-7c7f45b4bf-xx9p2" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.063454 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d7e7ea0-d123-41ab-bd59-0f6da52316bd-config-data\") pod \"barbican-keystone-listener-6cbdbc497d-dqskw\" (UID: \"4d7e7ea0-d123-41ab-bd59-0f6da52316bd\") " pod="openstack/barbican-keystone-listener-6cbdbc497d-dqskw" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.063492 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a6b0877-2c23-4ebd-a433-620571e4c0bf-combined-ca-bundle\") pod \"barbican-worker-7c7f45b4bf-xx9p2\" (UID: \"1a6b0877-2c23-4ebd-a433-620571e4c0bf\") " pod="openstack/barbican-worker-7c7f45b4bf-xx9p2" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.063574 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8t8x\" (UniqueName: \"kubernetes.io/projected/1a6b0877-2c23-4ebd-a433-620571e4c0bf-kube-api-access-r8t8x\") pod \"barbican-worker-7c7f45b4bf-xx9p2\" (UID: \"1a6b0877-2c23-4ebd-a433-620571e4c0bf\") " pod="openstack/barbican-worker-7c7f45b4bf-xx9p2" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.063618 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d7e7ea0-d123-41ab-bd59-0f6da52316bd-config-data-custom\") pod \"barbican-keystone-listener-6cbdbc497d-dqskw\" (UID: \"4d7e7ea0-d123-41ab-bd59-0f6da52316bd\") " 
pod="openstack/barbican-keystone-listener-6cbdbc497d-dqskw" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.063646 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a6b0877-2c23-4ebd-a433-620571e4c0bf-config-data\") pod \"barbican-worker-7c7f45b4bf-xx9p2\" (UID: \"1a6b0877-2c23-4ebd-a433-620571e4c0bf\") " pod="openstack/barbican-worker-7c7f45b4bf-xx9p2" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.063696 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqdvd\" (UniqueName: \"kubernetes.io/projected/4d7e7ea0-d123-41ab-bd59-0f6da52316bd-kube-api-access-hqdvd\") pod \"barbican-keystone-listener-6cbdbc497d-dqskw\" (UID: \"4d7e7ea0-d123-41ab-bd59-0f6da52316bd\") " pod="openstack/barbican-keystone-listener-6cbdbc497d-dqskw" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.063723 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a6b0877-2c23-4ebd-a433-620571e4c0bf-logs\") pod \"barbican-worker-7c7f45b4bf-xx9p2\" (UID: \"1a6b0877-2c23-4ebd-a433-620571e4c0bf\") " pod="openstack/barbican-worker-7c7f45b4bf-xx9p2" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.063747 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d7e7ea0-d123-41ab-bd59-0f6da52316bd-logs\") pod \"barbican-keystone-listener-6cbdbc497d-dqskw\" (UID: \"4d7e7ea0-d123-41ab-bd59-0f6da52316bd\") " pod="openstack/barbican-keystone-listener-6cbdbc497d-dqskw" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.063783 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d7e7ea0-d123-41ab-bd59-0f6da52316bd-combined-ca-bundle\") pod \"barbican-keystone-listener-6cbdbc497d-dqskw\" (UID: \"4d7e7ea0-d123-41ab-bd59-0f6da52316bd\") " pod="openstack/barbican-keystone-listener-6cbdbc497d-dqskw" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.086631 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.100021 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.113426 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.113515 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.124007 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.124202 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-5p5cg" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.136388 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.165490 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d7e7ea0-d123-41ab-bd59-0f6da52316bd-combined-ca-bundle\") pod \"barbican-keystone-listener-6cbdbc497d-dqskw\" (UID: \"4d7e7ea0-d123-41ab-bd59-0f6da52316bd\") " pod="openstack/barbican-keystone-listener-6cbdbc497d-dqskw" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.165777 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.165863 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjxtr\" (UniqueName: \"kubernetes.io/projected/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-kube-api-access-pjxtr\") pod \"cinder-scheduler-0\" (UID: \"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.165939 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a6b0877-2c23-4ebd-a433-620571e4c0bf-config-data-custom\") pod \"barbican-worker-7c7f45b4bf-xx9p2\" (UID: \"1a6b0877-2c23-4ebd-a433-620571e4c0bf\") " pod="openstack/barbican-worker-7c7f45b4bf-xx9p2" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.166035 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d7e7ea0-d123-41ab-bd59-0f6da52316bd-config-data\") pod \"barbican-keystone-listener-6cbdbc497d-dqskw\" (UID: \"4d7e7ea0-d123-41ab-bd59-0f6da52316bd\") " pod="openstack/barbican-keystone-listener-6cbdbc497d-dqskw" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.166140 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a6b0877-2c23-4ebd-a433-620571e4c0bf-combined-ca-bundle\") pod \"barbican-worker-7c7f45b4bf-xx9p2\" (UID: \"1a6b0877-2c23-4ebd-a433-620571e4c0bf\") " pod="openstack/barbican-worker-7c7f45b4bf-xx9p2" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.166239 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-config-data\") pod \"cinder-scheduler-0\" (UID: \"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.167000 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-scripts\") pod \"cinder-scheduler-0\" (UID: \"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.167135 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.167243 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8t8x\" (UniqueName: \"kubernetes.io/projected/1a6b0877-2c23-4ebd-a433-620571e4c0bf-kube-api-access-r8t8x\") pod \"barbican-worker-7c7f45b4bf-xx9p2\" (UID: \"1a6b0877-2c23-4ebd-a433-620571e4c0bf\") " pod="openstack/barbican-worker-7c7f45b4bf-xx9p2" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.167422 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d7e7ea0-d123-41ab-bd59-0f6da52316bd-config-data-custom\") pod \"barbican-keystone-listener-6cbdbc497d-dqskw\" (UID: \"4d7e7ea0-d123-41ab-bd59-0f6da52316bd\") " pod="openstack/barbican-keystone-listener-6cbdbc497d-dqskw" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.167528 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a6b0877-2c23-4ebd-a433-620571e4c0bf-config-data\") pod \"barbican-worker-7c7f45b4bf-xx9p2\" (UID: \"1a6b0877-2c23-4ebd-a433-620571e4c0bf\") " pod="openstack/barbican-worker-7c7f45b4bf-xx9p2" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.167678 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.167783 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqdvd\" (UniqueName: \"kubernetes.io/projected/4d7e7ea0-d123-41ab-bd59-0f6da52316bd-kube-api-access-hqdvd\") pod \"barbican-keystone-listener-6cbdbc497d-dqskw\" (UID: \"4d7e7ea0-d123-41ab-bd59-0f6da52316bd\") " pod="openstack/barbican-keystone-listener-6cbdbc497d-dqskw" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.179806 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a6b0877-2c23-4ebd-a433-620571e4c0bf-logs\") pod \"barbican-worker-7c7f45b4bf-xx9p2\" (UID: \"1a6b0877-2c23-4ebd-a433-620571e4c0bf\") " pod="openstack/barbican-worker-7c7f45b4bf-xx9p2" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.180095 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/4d7e7ea0-d123-41ab-bd59-0f6da52316bd-logs\") pod \"barbican-keystone-listener-6cbdbc497d-dqskw\" (UID: \"4d7e7ea0-d123-41ab-bd59-0f6da52316bd\") " pod="openstack/barbican-keystone-listener-6cbdbc497d-dqskw" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.180605 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d7e7ea0-d123-41ab-bd59-0f6da52316bd-logs\") pod \"barbican-keystone-listener-6cbdbc497d-dqskw\" (UID: \"4d7e7ea0-d123-41ab-bd59-0f6da52316bd\") " pod="openstack/barbican-keystone-listener-6cbdbc497d-dqskw" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.177344 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6cbdbc497d-dqskw"] Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.181020 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a6b0877-2c23-4ebd-a433-620571e4c0bf-logs\") pod \"barbican-worker-7c7f45b4bf-xx9p2\" (UID: \"1a6b0877-2c23-4ebd-a433-620571e4c0bf\") " pod="openstack/barbican-worker-7c7f45b4bf-xx9p2" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.179748 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d7e7ea0-d123-41ab-bd59-0f6da52316bd-config-data\") pod \"barbican-keystone-listener-6cbdbc497d-dqskw\" (UID: \"4d7e7ea0-d123-41ab-bd59-0f6da52316bd\") " pod="openstack/barbican-keystone-listener-6cbdbc497d-dqskw" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.193080 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d7e7ea0-d123-41ab-bd59-0f6da52316bd-combined-ca-bundle\") pod \"barbican-keystone-listener-6cbdbc497d-dqskw\" (UID: \"4d7e7ea0-d123-41ab-bd59-0f6da52316bd\") " pod="openstack/barbican-keystone-listener-6cbdbc497d-dqskw" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.193680 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a6b0877-2c23-4ebd-a433-620571e4c0bf-config-data-custom\") pod \"barbican-worker-7c7f45b4bf-xx9p2\" (UID: \"1a6b0877-2c23-4ebd-a433-620571e4c0bf\") " pod="openstack/barbican-worker-7c7f45b4bf-xx9p2" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.196400 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a6b0877-2c23-4ebd-a433-620571e4c0bf-combined-ca-bundle\") pod \"barbican-worker-7c7f45b4bf-xx9p2\" (UID: \"1a6b0877-2c23-4ebd-a433-620571e4c0bf\") " pod="openstack/barbican-worker-7c7f45b4bf-xx9p2" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.204408 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d7e7ea0-d123-41ab-bd59-0f6da52316bd-config-data-custom\") pod \"barbican-keystone-listener-6cbdbc497d-dqskw\" (UID: \"4d7e7ea0-d123-41ab-bd59-0f6da52316bd\") " pod="openstack/barbican-keystone-listener-6cbdbc497d-dqskw" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.223862 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8t8x\" (UniqueName: \"kubernetes.io/projected/1a6b0877-2c23-4ebd-a433-620571e4c0bf-kube-api-access-r8t8x\") pod \"barbican-worker-7c7f45b4bf-xx9p2\" (UID: \"1a6b0877-2c23-4ebd-a433-620571e4c0bf\") " 
pod="openstack/barbican-worker-7c7f45b4bf-xx9p2" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.257728 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a6b0877-2c23-4ebd-a433-620571e4c0bf-config-data\") pod \"barbican-worker-7c7f45b4bf-xx9p2\" (UID: \"1a6b0877-2c23-4ebd-a433-620571e4c0bf\") " pod="openstack/barbican-worker-7c7f45b4bf-xx9p2" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.296613 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.296725 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjxtr\" (UniqueName: \"kubernetes.io/projected/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-kube-api-access-pjxtr\") pod \"cinder-scheduler-0\" (UID: \"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.296836 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-config-data\") pod \"cinder-scheduler-0\" (UID: \"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.296858 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-scripts\") pod \"cinder-scheduler-0\" (UID: \"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.296918 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.296988 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.297346 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.304818 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqdvd\" (UniqueName: \"kubernetes.io/projected/4d7e7ea0-d123-41ab-bd59-0f6da52316bd-kube-api-access-hqdvd\") pod \"barbican-keystone-listener-6cbdbc497d-dqskw\" (UID: \"4d7e7ea0-d123-41ab-bd59-0f6da52316bd\") " pod="openstack/barbican-keystone-listener-6cbdbc497d-dqskw" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.356842 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6cbdbc497d-dqskw" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.422022 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-scripts\") pod \"cinder-scheduler-0\" (UID: \"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.422067 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.368832 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.423415 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-config-data\") pod \"cinder-scheduler-0\" (UID: \"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.428063 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjxtr\" (UniqueName: \"kubernetes.io/projected/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-kube-api-access-pjxtr\") pod \"cinder-scheduler-0\" (UID: \"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.437795 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-2c5ww"] Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.438444 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.440439 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54fd5b85c6-qxq5r" event={"ID":"f3109c9e-e308-4022-8b5e-4b8b61319c24","Type":"ContainerStarted","Data":"867b7721afacf710e82112037e4927efef3ba73c17c7aa5bb8cc9d443b188243"} Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.441263 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-54fd5b85c6-qxq5r" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.449978 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-fqr94"] Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.451554 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-fqr94" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.459975 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-fqr94"] Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.474793 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-2c5ww" event={"ID":"db3bd5d3-0b01-47ff-8410-95fd1e351060","Type":"ContainerStarted","Data":"42cf7fc7a3882bd379081b42bcdedfc47048765bcbe24413c7ab619231645e87"} Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.475343 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b9c8b59c-2c5ww" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.476886 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7fbbbbb748-pbjzb"] Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.478434 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7fbbbbb748-pbjzb" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.485445 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.491757 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7fbbbbb748-pbjzb"] Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.494884 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b7dcc6b6f-vkw7t" event={"ID":"6d80b524-788b-4fdf-b8bf-28ae522512e1","Type":"ContainerStarted","Data":"e4f5ac7e064101df586b790e7225057c24f9fcaba9515c855a45d815c6b9bf2d"} Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.494918 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b7dcc6b6f-vkw7t" event={"ID":"6d80b524-788b-4fdf-b8bf-28ae522512e1","Type":"ContainerStarted","Data":"076d1453e911a8f62a67138907c7727c785f520b7736d82393b68965a3842f73"} Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.507472 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4","Type":"ContainerStarted","Data":"a012fa80b13ee587d26579452c5e50427e2ae4599e5536bf2c24c2b364a3bcb1"} Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.507635 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b1b8dc7-a488-447d-8c4a-21119f3e3dd4" containerName="ceilometer-notification-agent" containerID="cri-o://d6b4ec33cb48d062ea41f639919a5b3e17d94fe4ffe83d9b4bde6b7be77d52cd" gracePeriod=30 Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.507736 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.507780 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b1b8dc7-a488-447d-8c4a-21119f3e3dd4" containerName="proxy-httpd" containerID="cri-o://a012fa80b13ee587d26579452c5e50427e2ae4599e5536bf2c24c2b364a3bcb1" gracePeriod=30 Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.507813 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b1b8dc7-a488-447d-8c4a-21119f3e3dd4" containerName="sg-core" containerID="cri-o://2c7775e717dd114b82af204da201ff6ca2f17a0cb0e04dea261967b6e52e8d47" gracePeriod=30 Jan 22 09:28:48 crc 
kubenswrapper[4892]: I0122 09:28:48.522909 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.524248 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7c7f45b4bf-xx9p2" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.524850 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.527658 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.529006 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4609a5c-a8a3-4516-82db-e66273379720-config\") pod \"dnsmasq-dns-75bfc9b94f-fqr94\" (UID: \"a4609a5c-a8a3-4516-82db-e66273379720\") " pod="openstack/dnsmasq-dns-75bfc9b94f-fqr94" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.530470 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4609a5c-a8a3-4516-82db-e66273379720-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-fqr94\" (UID: \"a4609a5c-a8a3-4516-82db-e66273379720\") " pod="openstack/dnsmasq-dns-75bfc9b94f-fqr94" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.530602 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4609a5c-a8a3-4516-82db-e66273379720-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-fqr94\" (UID: \"a4609a5c-a8a3-4516-82db-e66273379720\") " pod="openstack/dnsmasq-dns-75bfc9b94f-fqr94" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.530862 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4609a5c-a8a3-4516-82db-e66273379720-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-fqr94\" (UID: \"a4609a5c-a8a3-4516-82db-e66273379720\") " pod="openstack/dnsmasq-dns-75bfc9b94f-fqr94" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.530964 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9tvg\" (UniqueName: \"kubernetes.io/projected/a4609a5c-a8a3-4516-82db-e66273379720-kube-api-access-p9tvg\") pod \"dnsmasq-dns-75bfc9b94f-fqr94\" (UID: \"a4609a5c-a8a3-4516-82db-e66273379720\") " pod="openstack/dnsmasq-dns-75bfc9b94f-fqr94" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.531115 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4609a5c-a8a3-4516-82db-e66273379720-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-fqr94\" (UID: \"a4609a5c-a8a3-4516-82db-e66273379720\") " pod="openstack/dnsmasq-dns-75bfc9b94f-fqr94" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.553397 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.553742 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-54fd5b85c6-qxq5r" podStartSLOduration=9.553718038 podStartE2EDuration="9.553718038s" podCreationTimestamp="2026-01-22 09:28:39 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:28:48.474462338 +0000 UTC m=+1098.318541401" watchObservedRunningTime="2026-01-22 09:28:48.553718038 +0000 UTC m=+1098.397797101" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.576887 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b9c8b59c-2c5ww" podStartSLOduration=9.576868864 podStartE2EDuration="9.576868864s" podCreationTimestamp="2026-01-22 09:28:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:28:48.529716413 +0000 UTC m=+1098.373795476" watchObservedRunningTime="2026-01-22 09:28:48.576868864 +0000 UTC m=+1098.420947927" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.632663 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4609a5c-a8a3-4516-82db-e66273379720-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-fqr94\" (UID: \"a4609a5c-a8a3-4516-82db-e66273379720\") " pod="openstack/dnsmasq-dns-75bfc9b94f-fqr94" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.633029 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5766f7c7-fa97-4fe0-af76-e06667a8079b-config-data\") pod \"barbican-api-7fbbbbb748-pbjzb\" (UID: \"5766f7c7-fa97-4fe0-af76-e06667a8079b\") " pod="openstack/barbican-api-7fbbbbb748-pbjzb" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.633062 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9tvg\" (UniqueName: \"kubernetes.io/projected/a4609a5c-a8a3-4516-82db-e66273379720-kube-api-access-p9tvg\") pod \"dnsmasq-dns-75bfc9b94f-fqr94\" (UID: \"a4609a5c-a8a3-4516-82db-e66273379720\") " pod="openstack/dnsmasq-dns-75bfc9b94f-fqr94" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.633156 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4609a5c-a8a3-4516-82db-e66273379720-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-fqr94\" (UID: \"a4609a5c-a8a3-4516-82db-e66273379720\") " pod="openstack/dnsmasq-dns-75bfc9b94f-fqr94" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.633218 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d3567b-bda5-4ea2-9e1d-24f617405a38-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"79d3567b-bda5-4ea2-9e1d-24f617405a38\") " pod="openstack/cinder-api-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.633256 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79d3567b-bda5-4ea2-9e1d-24f617405a38-scripts\") pod \"cinder-api-0\" (UID: \"79d3567b-bda5-4ea2-9e1d-24f617405a38\") " pod="openstack/cinder-api-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.633279 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5766f7c7-fa97-4fe0-af76-e06667a8079b-config-data-custom\") pod \"barbican-api-7fbbbbb748-pbjzb\" (UID: \"5766f7c7-fa97-4fe0-af76-e06667a8079b\") " 
pod="openstack/barbican-api-7fbbbbb748-pbjzb" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.633320 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnfsp\" (UniqueName: \"kubernetes.io/projected/79d3567b-bda5-4ea2-9e1d-24f617405a38-kube-api-access-tnfsp\") pod \"cinder-api-0\" (UID: \"79d3567b-bda5-4ea2-9e1d-24f617405a38\") " pod="openstack/cinder-api-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.633353 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79d3567b-bda5-4ea2-9e1d-24f617405a38-config-data-custom\") pod \"cinder-api-0\" (UID: \"79d3567b-bda5-4ea2-9e1d-24f617405a38\") " pod="openstack/cinder-api-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.633380 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5766f7c7-fa97-4fe0-af76-e06667a8079b-logs\") pod \"barbican-api-7fbbbbb748-pbjzb\" (UID: \"5766f7c7-fa97-4fe0-af76-e06667a8079b\") " pod="openstack/barbican-api-7fbbbbb748-pbjzb" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.633409 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64mxk\" (UniqueName: \"kubernetes.io/projected/5766f7c7-fa97-4fe0-af76-e06667a8079b-kube-api-access-64mxk\") pod \"barbican-api-7fbbbbb748-pbjzb\" (UID: \"5766f7c7-fa97-4fe0-af76-e06667a8079b\") " pod="openstack/barbican-api-7fbbbbb748-pbjzb" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.633445 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79d3567b-bda5-4ea2-9e1d-24f617405a38-logs\") pod \"cinder-api-0\" (UID: \"79d3567b-bda5-4ea2-9e1d-24f617405a38\") " pod="openstack/cinder-api-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.633469 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4609a5c-a8a3-4516-82db-e66273379720-config\") pod \"dnsmasq-dns-75bfc9b94f-fqr94\" (UID: \"a4609a5c-a8a3-4516-82db-e66273379720\") " pod="openstack/dnsmasq-dns-75bfc9b94f-fqr94" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.633496 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d3567b-bda5-4ea2-9e1d-24f617405a38-config-data\") pod \"cinder-api-0\" (UID: \"79d3567b-bda5-4ea2-9e1d-24f617405a38\") " pod="openstack/cinder-api-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.633527 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4609a5c-a8a3-4516-82db-e66273379720-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-fqr94\" (UID: \"a4609a5c-a8a3-4516-82db-e66273379720\") " pod="openstack/dnsmasq-dns-75bfc9b94f-fqr94" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.633558 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/79d3567b-bda5-4ea2-9e1d-24f617405a38-etc-machine-id\") pod \"cinder-api-0\" (UID: \"79d3567b-bda5-4ea2-9e1d-24f617405a38\") " pod="openstack/cinder-api-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 
09:28:48.633598 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5766f7c7-fa97-4fe0-af76-e06667a8079b-combined-ca-bundle\") pod \"barbican-api-7fbbbbb748-pbjzb\" (UID: \"5766f7c7-fa97-4fe0-af76-e06667a8079b\") " pod="openstack/barbican-api-7fbbbbb748-pbjzb" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.633634 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4609a5c-a8a3-4516-82db-e66273379720-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-fqr94\" (UID: \"a4609a5c-a8a3-4516-82db-e66273379720\") " pod="openstack/dnsmasq-dns-75bfc9b94f-fqr94" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.634447 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4609a5c-a8a3-4516-82db-e66273379720-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-fqr94\" (UID: \"a4609a5c-a8a3-4516-82db-e66273379720\") " pod="openstack/dnsmasq-dns-75bfc9b94f-fqr94" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.634890 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4609a5c-a8a3-4516-82db-e66273379720-config\") pod \"dnsmasq-dns-75bfc9b94f-fqr94\" (UID: \"a4609a5c-a8a3-4516-82db-e66273379720\") " pod="openstack/dnsmasq-dns-75bfc9b94f-fqr94" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.635181 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4609a5c-a8a3-4516-82db-e66273379720-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-fqr94\" (UID: \"a4609a5c-a8a3-4516-82db-e66273379720\") " pod="openstack/dnsmasq-dns-75bfc9b94f-fqr94" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.635652 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4609a5c-a8a3-4516-82db-e66273379720-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-fqr94\" (UID: \"a4609a5c-a8a3-4516-82db-e66273379720\") " pod="openstack/dnsmasq-dns-75bfc9b94f-fqr94" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.641171 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4609a5c-a8a3-4516-82db-e66273379720-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-fqr94\" (UID: \"a4609a5c-a8a3-4516-82db-e66273379720\") " pod="openstack/dnsmasq-dns-75bfc9b94f-fqr94" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.652724 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9tvg\" (UniqueName: \"kubernetes.io/projected/a4609a5c-a8a3-4516-82db-e66273379720-kube-api-access-p9tvg\") pod \"dnsmasq-dns-75bfc9b94f-fqr94\" (UID: \"a4609a5c-a8a3-4516-82db-e66273379720\") " pod="openstack/dnsmasq-dns-75bfc9b94f-fqr94" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.736584 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnfsp\" (UniqueName: \"kubernetes.io/projected/79d3567b-bda5-4ea2-9e1d-24f617405a38-kube-api-access-tnfsp\") pod \"cinder-api-0\" (UID: \"79d3567b-bda5-4ea2-9e1d-24f617405a38\") " pod="openstack/cinder-api-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.736630 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79d3567b-bda5-4ea2-9e1d-24f617405a38-config-data-custom\") pod \"cinder-api-0\" (UID: \"79d3567b-bda5-4ea2-9e1d-24f617405a38\") " pod="openstack/cinder-api-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.736654 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5766f7c7-fa97-4fe0-af76-e06667a8079b-logs\") pod \"barbican-api-7fbbbbb748-pbjzb\" (UID: \"5766f7c7-fa97-4fe0-af76-e06667a8079b\") " pod="openstack/barbican-api-7fbbbbb748-pbjzb" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.736676 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64mxk\" (UniqueName: \"kubernetes.io/projected/5766f7c7-fa97-4fe0-af76-e06667a8079b-kube-api-access-64mxk\") pod \"barbican-api-7fbbbbb748-pbjzb\" (UID: \"5766f7c7-fa97-4fe0-af76-e06667a8079b\") " pod="openstack/barbican-api-7fbbbbb748-pbjzb" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.736704 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79d3567b-bda5-4ea2-9e1d-24f617405a38-logs\") pod \"cinder-api-0\" (UID: \"79d3567b-bda5-4ea2-9e1d-24f617405a38\") " pod="openstack/cinder-api-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.736723 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d3567b-bda5-4ea2-9e1d-24f617405a38-config-data\") pod \"cinder-api-0\" (UID: \"79d3567b-bda5-4ea2-9e1d-24f617405a38\") " pod="openstack/cinder-api-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.736753 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/79d3567b-bda5-4ea2-9e1d-24f617405a38-etc-machine-id\") pod \"cinder-api-0\" (UID: \"79d3567b-bda5-4ea2-9e1d-24f617405a38\") " pod="openstack/cinder-api-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.736774 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5766f7c7-fa97-4fe0-af76-e06667a8079b-combined-ca-bundle\") pod \"barbican-api-7fbbbbb748-pbjzb\" (UID: \"5766f7c7-fa97-4fe0-af76-e06667a8079b\") " pod="openstack/barbican-api-7fbbbbb748-pbjzb" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.736818 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5766f7c7-fa97-4fe0-af76-e06667a8079b-config-data\") pod \"barbican-api-7fbbbbb748-pbjzb\" (UID: \"5766f7c7-fa97-4fe0-af76-e06667a8079b\") " pod="openstack/barbican-api-7fbbbbb748-pbjzb" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.736886 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d3567b-bda5-4ea2-9e1d-24f617405a38-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"79d3567b-bda5-4ea2-9e1d-24f617405a38\") " pod="openstack/cinder-api-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.736919 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79d3567b-bda5-4ea2-9e1d-24f617405a38-scripts\") pod \"cinder-api-0\" (UID: \"79d3567b-bda5-4ea2-9e1d-24f617405a38\") " pod="openstack/cinder-api-0" Jan 22 09:28:48 crc 
kubenswrapper[4892]: I0122 09:28:48.736934 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5766f7c7-fa97-4fe0-af76-e06667a8079b-config-data-custom\") pod \"barbican-api-7fbbbbb748-pbjzb\" (UID: \"5766f7c7-fa97-4fe0-af76-e06667a8079b\") " pod="openstack/barbican-api-7fbbbbb748-pbjzb" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.738753 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5766f7c7-fa97-4fe0-af76-e06667a8079b-logs\") pod \"barbican-api-7fbbbbb748-pbjzb\" (UID: \"5766f7c7-fa97-4fe0-af76-e06667a8079b\") " pod="openstack/barbican-api-7fbbbbb748-pbjzb" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.739926 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/79d3567b-bda5-4ea2-9e1d-24f617405a38-etc-machine-id\") pod \"cinder-api-0\" (UID: \"79d3567b-bda5-4ea2-9e1d-24f617405a38\") " pod="openstack/cinder-api-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.740712 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79d3567b-bda5-4ea2-9e1d-24f617405a38-logs\") pod \"cinder-api-0\" (UID: \"79d3567b-bda5-4ea2-9e1d-24f617405a38\") " pod="openstack/cinder-api-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.749137 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d3567b-bda5-4ea2-9e1d-24f617405a38-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"79d3567b-bda5-4ea2-9e1d-24f617405a38\") " pod="openstack/cinder-api-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.751355 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d3567b-bda5-4ea2-9e1d-24f617405a38-config-data\") pod \"cinder-api-0\" (UID: \"79d3567b-bda5-4ea2-9e1d-24f617405a38\") " pod="openstack/cinder-api-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.751478 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5766f7c7-fa97-4fe0-af76-e06667a8079b-combined-ca-bundle\") pod \"barbican-api-7fbbbbb748-pbjzb\" (UID: \"5766f7c7-fa97-4fe0-af76-e06667a8079b\") " pod="openstack/barbican-api-7fbbbbb748-pbjzb" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.751584 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5766f7c7-fa97-4fe0-af76-e06667a8079b-config-data\") pod \"barbican-api-7fbbbbb748-pbjzb\" (UID: \"5766f7c7-fa97-4fe0-af76-e06667a8079b\") " pod="openstack/barbican-api-7fbbbbb748-pbjzb" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.752077 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5766f7c7-fa97-4fe0-af76-e06667a8079b-config-data-custom\") pod \"barbican-api-7fbbbbb748-pbjzb\" (UID: \"5766f7c7-fa97-4fe0-af76-e06667a8079b\") " pod="openstack/barbican-api-7fbbbbb748-pbjzb" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.753737 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79d3567b-bda5-4ea2-9e1d-24f617405a38-scripts\") pod \"cinder-api-0\" (UID: 
\"79d3567b-bda5-4ea2-9e1d-24f617405a38\") " pod="openstack/cinder-api-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.756075 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64mxk\" (UniqueName: \"kubernetes.io/projected/5766f7c7-fa97-4fe0-af76-e06667a8079b-kube-api-access-64mxk\") pod \"barbican-api-7fbbbbb748-pbjzb\" (UID: \"5766f7c7-fa97-4fe0-af76-e06667a8079b\") " pod="openstack/barbican-api-7fbbbbb748-pbjzb" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.756623 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79d3567b-bda5-4ea2-9e1d-24f617405a38-config-data-custom\") pod \"cinder-api-0\" (UID: \"79d3567b-bda5-4ea2-9e1d-24f617405a38\") " pod="openstack/cinder-api-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.789000 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnfsp\" (UniqueName: \"kubernetes.io/projected/79d3567b-bda5-4ea2-9e1d-24f617405a38-kube-api-access-tnfsp\") pod \"cinder-api-0\" (UID: \"79d3567b-bda5-4ea2-9e1d-24f617405a38\") " pod="openstack/cinder-api-0" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.819732 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-fqr94" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.838786 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7fbbbbb748-pbjzb" Jan 22 09:28:48 crc kubenswrapper[4892]: I0122 09:28:48.850598 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 22 09:28:49 crc kubenswrapper[4892]: I0122 09:28:49.079901 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6cbdbc497d-dqskw"] Jan 22 09:28:49 crc kubenswrapper[4892]: I0122 09:28:49.174507 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 09:28:49 crc kubenswrapper[4892]: I0122 09:28:49.197101 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7c7f45b4bf-xx9p2"] Jan 22 09:28:49 crc kubenswrapper[4892]: I0122 09:28:49.353211 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-fqr94"] Jan 22 09:28:49 crc kubenswrapper[4892]: I0122 09:28:49.480848 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7fbbbbb748-pbjzb"] Jan 22 09:28:49 crc kubenswrapper[4892]: I0122 09:28:49.518221 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6cbdbc497d-dqskw" event={"ID":"4d7e7ea0-d123-41ab-bd59-0f6da52316bd","Type":"ContainerStarted","Data":"96f26e2babc92c0a3d6c7e602f20c26c8b06d4549ebcdb4d64396acfb23c9ed3"} Jan 22 09:28:49 crc kubenswrapper[4892]: I0122 09:28:49.522039 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7fbbbbb748-pbjzb" event={"ID":"5766f7c7-fa97-4fe0-af76-e06667a8079b","Type":"ContainerStarted","Data":"a43cd0a0f2cf75182e26a41a362d5d1f20621556a03b3b7304304a17a0b5f4da"} Jan 22 09:28:49 crc kubenswrapper[4892]: I0122 09:28:49.529261 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7c7f45b4bf-xx9p2" event={"ID":"1a6b0877-2c23-4ebd-a433-620571e4c0bf","Type":"ContainerStarted","Data":"97fa577f9f33cd212eb0f40e43226c56d27b40bf2e5723a4cdf943d3529d14d3"} Jan 22 09:28:49 crc 
kubenswrapper[4892]: I0122 09:28:49.532116 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-fqr94" event={"ID":"a4609a5c-a8a3-4516-82db-e66273379720","Type":"ContainerStarted","Data":"8c0476b1a012178464a2103d8f1a4e34b78d4532f74d901dae93e71f7a916a92"} Jan 22 09:28:49 crc kubenswrapper[4892]: I0122 09:28:49.537776 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b7dcc6b6f-vkw7t" event={"ID":"6d80b524-788b-4fdf-b8bf-28ae522512e1","Type":"ContainerStarted","Data":"b293ed9ad836ab964380e9b45682588071eae112bb3bbac23812194b6ad557af"} Jan 22 09:28:49 crc kubenswrapper[4892]: I0122 09:28:49.538820 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5b7dcc6b6f-vkw7t" Jan 22 09:28:49 crc kubenswrapper[4892]: I0122 09:28:49.556854 4892 generic.go:334] "Generic (PLEG): container finished" podID="0b1b8dc7-a488-447d-8c4a-21119f3e3dd4" containerID="a012fa80b13ee587d26579452c5e50427e2ae4599e5536bf2c24c2b364a3bcb1" exitCode=0 Jan 22 09:28:49 crc kubenswrapper[4892]: I0122 09:28:49.556885 4892 generic.go:334] "Generic (PLEG): container finished" podID="0b1b8dc7-a488-447d-8c4a-21119f3e3dd4" containerID="2c7775e717dd114b82af204da201ff6ca2f17a0cb0e04dea261967b6e52e8d47" exitCode=2 Jan 22 09:28:49 crc kubenswrapper[4892]: I0122 09:28:49.556960 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4","Type":"ContainerDied","Data":"a012fa80b13ee587d26579452c5e50427e2ae4599e5536bf2c24c2b364a3bcb1"} Jan 22 09:28:49 crc kubenswrapper[4892]: I0122 09:28:49.556984 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4","Type":"ContainerDied","Data":"2c7775e717dd114b82af204da201ff6ca2f17a0cb0e04dea261967b6e52e8d47"} Jan 22 09:28:49 crc kubenswrapper[4892]: I0122 09:28:49.559135 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8","Type":"ContainerStarted","Data":"7b2bb37aa017be70c973f2aa84afab2e79b1f1c91ee33d188644afa2b27ade66"} Jan 22 09:28:49 crc kubenswrapper[4892]: I0122 09:28:49.559279 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b9c8b59c-2c5ww" podUID="db3bd5d3-0b01-47ff-8410-95fd1e351060" containerName="dnsmasq-dns" containerID="cri-o://42cf7fc7a3882bd379081b42bcdedfc47048765bcbe24413c7ab619231645e87" gracePeriod=10 Jan 22 09:28:49 crc kubenswrapper[4892]: I0122 09:28:49.572757 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5b7dcc6b6f-vkw7t" podStartSLOduration=8.572735673 podStartE2EDuration="8.572735673s" podCreationTimestamp="2026-01-22 09:28:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:28:49.560659013 +0000 UTC m=+1099.404738076" watchObservedRunningTime="2026-01-22 09:28:49.572735673 +0000 UTC m=+1099.416814746" Jan 22 09:28:49 crc kubenswrapper[4892]: I0122 09:28:49.584639 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 22 09:28:50 crc kubenswrapper[4892]: I0122 09:28:50.158194 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 22 09:28:50 crc kubenswrapper[4892]: I0122 09:28:50.533917 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b9c8b59c-2c5ww" Jan 22 09:28:50 crc kubenswrapper[4892]: I0122 09:28:50.587747 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/db3bd5d3-0b01-47ff-8410-95fd1e351060-dns-swift-storage-0\") pod \"db3bd5d3-0b01-47ff-8410-95fd1e351060\" (UID: \"db3bd5d3-0b01-47ff-8410-95fd1e351060\") " Jan 22 09:28:50 crc kubenswrapper[4892]: I0122 09:28:50.588127 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db3bd5d3-0b01-47ff-8410-95fd1e351060-ovsdbserver-sb\") pod \"db3bd5d3-0b01-47ff-8410-95fd1e351060\" (UID: \"db3bd5d3-0b01-47ff-8410-95fd1e351060\") " Jan 22 09:28:50 crc kubenswrapper[4892]: I0122 09:28:50.588150 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db3bd5d3-0b01-47ff-8410-95fd1e351060-dns-svc\") pod \"db3bd5d3-0b01-47ff-8410-95fd1e351060\" (UID: \"db3bd5d3-0b01-47ff-8410-95fd1e351060\") " Jan 22 09:28:50 crc kubenswrapper[4892]: I0122 09:28:50.588775 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxbx4\" (UniqueName: \"kubernetes.io/projected/db3bd5d3-0b01-47ff-8410-95fd1e351060-kube-api-access-zxbx4\") pod \"db3bd5d3-0b01-47ff-8410-95fd1e351060\" (UID: \"db3bd5d3-0b01-47ff-8410-95fd1e351060\") " Jan 22 09:28:50 crc kubenswrapper[4892]: I0122 09:28:50.588834 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db3bd5d3-0b01-47ff-8410-95fd1e351060-ovsdbserver-nb\") pod \"db3bd5d3-0b01-47ff-8410-95fd1e351060\" (UID: \"db3bd5d3-0b01-47ff-8410-95fd1e351060\") " Jan 22 09:28:50 crc kubenswrapper[4892]: I0122 09:28:50.588894 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db3bd5d3-0b01-47ff-8410-95fd1e351060-config\") pod \"db3bd5d3-0b01-47ff-8410-95fd1e351060\" (UID: \"db3bd5d3-0b01-47ff-8410-95fd1e351060\") " Jan 22 09:28:50 crc kubenswrapper[4892]: I0122 09:28:50.618729 4892 generic.go:334] "Generic (PLEG): container finished" podID="db3bd5d3-0b01-47ff-8410-95fd1e351060" containerID="42cf7fc7a3882bd379081b42bcdedfc47048765bcbe24413c7ab619231645e87" exitCode=0 Jan 22 09:28:50 crc kubenswrapper[4892]: I0122 09:28:50.618771 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b9c8b59c-2c5ww" Jan 22 09:28:50 crc kubenswrapper[4892]: I0122 09:28:50.618803 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-2c5ww" event={"ID":"db3bd5d3-0b01-47ff-8410-95fd1e351060","Type":"ContainerDied","Data":"42cf7fc7a3882bd379081b42bcdedfc47048765bcbe24413c7ab619231645e87"} Jan 22 09:28:50 crc kubenswrapper[4892]: I0122 09:28:50.618827 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-2c5ww" event={"ID":"db3bd5d3-0b01-47ff-8410-95fd1e351060","Type":"ContainerDied","Data":"21dab7644292125fb7de4adcc4b75e48c59db69be5a05561ca367219a9021428"} Jan 22 09:28:50 crc kubenswrapper[4892]: I0122 09:28:50.618845 4892 scope.go:117] "RemoveContainer" containerID="42cf7fc7a3882bd379081b42bcdedfc47048765bcbe24413c7ab619231645e87" Jan 22 09:28:50 crc kubenswrapper[4892]: I0122 09:28:50.628566 4892 generic.go:334] "Generic (PLEG): container finished" podID="a4609a5c-a8a3-4516-82db-e66273379720" containerID="b7106e4c2c0db2d87d4e3fd49f9f61d74404cfdb7c70660512ad56fc51b40e77" exitCode=0 Jan 22 09:28:50 crc kubenswrapper[4892]: I0122 09:28:50.628628 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-fqr94" event={"ID":"a4609a5c-a8a3-4516-82db-e66273379720","Type":"ContainerDied","Data":"b7106e4c2c0db2d87d4e3fd49f9f61d74404cfdb7c70660512ad56fc51b40e77"} Jan 22 09:28:50 crc kubenswrapper[4892]: I0122 09:28:50.630115 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db3bd5d3-0b01-47ff-8410-95fd1e351060-kube-api-access-zxbx4" (OuterVolumeSpecName: "kube-api-access-zxbx4") pod "db3bd5d3-0b01-47ff-8410-95fd1e351060" (UID: "db3bd5d3-0b01-47ff-8410-95fd1e351060"). InnerVolumeSpecName "kube-api-access-zxbx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:28:50 crc kubenswrapper[4892]: I0122 09:28:50.630298 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"79d3567b-bda5-4ea2-9e1d-24f617405a38","Type":"ContainerStarted","Data":"8ce11f6cc97188658c38fc3793ea2f2ab7dc20f90f14e13b4ecf74fb759b68f8"} Jan 22 09:28:50 crc kubenswrapper[4892]: I0122 09:28:50.635204 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7fbbbbb748-pbjzb" event={"ID":"5766f7c7-fa97-4fe0-af76-e06667a8079b","Type":"ContainerStarted","Data":"38190e579c44ae7b68c5d69d0962ee46006456495ac5d09a2b0fd1e7ff4ed696"} Jan 22 09:28:50 crc kubenswrapper[4892]: I0122 09:28:50.692274 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxbx4\" (UniqueName: \"kubernetes.io/projected/db3bd5d3-0b01-47ff-8410-95fd1e351060-kube-api-access-zxbx4\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:50 crc kubenswrapper[4892]: I0122 09:28:50.747442 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db3bd5d3-0b01-47ff-8410-95fd1e351060-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "db3bd5d3-0b01-47ff-8410-95fd1e351060" (UID: "db3bd5d3-0b01-47ff-8410-95fd1e351060"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:28:50 crc kubenswrapper[4892]: I0122 09:28:50.746029 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db3bd5d3-0b01-47ff-8410-95fd1e351060-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "db3bd5d3-0b01-47ff-8410-95fd1e351060" (UID: "db3bd5d3-0b01-47ff-8410-95fd1e351060"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:28:50 crc kubenswrapper[4892]: I0122 09:28:50.751656 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db3bd5d3-0b01-47ff-8410-95fd1e351060-config" (OuterVolumeSpecName: "config") pod "db3bd5d3-0b01-47ff-8410-95fd1e351060" (UID: "db3bd5d3-0b01-47ff-8410-95fd1e351060"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:28:50 crc kubenswrapper[4892]: I0122 09:28:50.760935 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db3bd5d3-0b01-47ff-8410-95fd1e351060-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "db3bd5d3-0b01-47ff-8410-95fd1e351060" (UID: "db3bd5d3-0b01-47ff-8410-95fd1e351060"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:28:50 crc kubenswrapper[4892]: I0122 09:28:50.766831 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db3bd5d3-0b01-47ff-8410-95fd1e351060-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "db3bd5d3-0b01-47ff-8410-95fd1e351060" (UID: "db3bd5d3-0b01-47ff-8410-95fd1e351060"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:28:50 crc kubenswrapper[4892]: I0122 09:28:50.794039 4892 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/db3bd5d3-0b01-47ff-8410-95fd1e351060-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:50 crc kubenswrapper[4892]: I0122 09:28:50.794077 4892 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db3bd5d3-0b01-47ff-8410-95fd1e351060-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:50 crc kubenswrapper[4892]: I0122 09:28:50.794092 4892 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db3bd5d3-0b01-47ff-8410-95fd1e351060-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:50 crc kubenswrapper[4892]: I0122 09:28:50.794104 4892 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db3bd5d3-0b01-47ff-8410-95fd1e351060-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:50 crc kubenswrapper[4892]: I0122 09:28:50.794116 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db3bd5d3-0b01-47ff-8410-95fd1e351060-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:50 crc kubenswrapper[4892]: I0122 09:28:50.951181 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-2c5ww"] Jan 22 09:28:50 crc kubenswrapper[4892]: I0122 09:28:50.959170 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-2c5ww"] Jan 22 09:28:50 crc kubenswrapper[4892]: I0122 09:28:50.981583 4892 scope.go:117] "RemoveContainer" 
containerID="a2a4ccc6638aca97624c949d457e62481e22b302b156ab36074dd8b85e838219" Jan 22 09:28:51 crc kubenswrapper[4892]: I0122 09:28:51.429782 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db3bd5d3-0b01-47ff-8410-95fd1e351060" path="/var/lib/kubelet/pods/db3bd5d3-0b01-47ff-8410-95fd1e351060/volumes" Jan 22 09:28:51 crc kubenswrapper[4892]: I0122 09:28:51.495549 4892 scope.go:117] "RemoveContainer" containerID="42cf7fc7a3882bd379081b42bcdedfc47048765bcbe24413c7ab619231645e87" Jan 22 09:28:51 crc kubenswrapper[4892]: E0122 09:28:51.507217 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42cf7fc7a3882bd379081b42bcdedfc47048765bcbe24413c7ab619231645e87\": container with ID starting with 42cf7fc7a3882bd379081b42bcdedfc47048765bcbe24413c7ab619231645e87 not found: ID does not exist" containerID="42cf7fc7a3882bd379081b42bcdedfc47048765bcbe24413c7ab619231645e87" Jan 22 09:28:51 crc kubenswrapper[4892]: I0122 09:28:51.507301 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42cf7fc7a3882bd379081b42bcdedfc47048765bcbe24413c7ab619231645e87"} err="failed to get container status \"42cf7fc7a3882bd379081b42bcdedfc47048765bcbe24413c7ab619231645e87\": rpc error: code = NotFound desc = could not find container \"42cf7fc7a3882bd379081b42bcdedfc47048765bcbe24413c7ab619231645e87\": container with ID starting with 42cf7fc7a3882bd379081b42bcdedfc47048765bcbe24413c7ab619231645e87 not found: ID does not exist" Jan 22 09:28:51 crc kubenswrapper[4892]: I0122 09:28:51.507335 4892 scope.go:117] "RemoveContainer" containerID="a2a4ccc6638aca97624c949d457e62481e22b302b156ab36074dd8b85e838219" Jan 22 09:28:51 crc kubenswrapper[4892]: E0122 09:28:51.507669 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2a4ccc6638aca97624c949d457e62481e22b302b156ab36074dd8b85e838219\": container with ID starting with a2a4ccc6638aca97624c949d457e62481e22b302b156ab36074dd8b85e838219 not found: ID does not exist" containerID="a2a4ccc6638aca97624c949d457e62481e22b302b156ab36074dd8b85e838219" Jan 22 09:28:51 crc kubenswrapper[4892]: I0122 09:28:51.507697 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2a4ccc6638aca97624c949d457e62481e22b302b156ab36074dd8b85e838219"} err="failed to get container status \"a2a4ccc6638aca97624c949d457e62481e22b302b156ab36074dd8b85e838219\": rpc error: code = NotFound desc = could not find container \"a2a4ccc6638aca97624c949d457e62481e22b302b156ab36074dd8b85e838219\": container with ID starting with a2a4ccc6638aca97624c949d457e62481e22b302b156ab36074dd8b85e838219 not found: ID does not exist" Jan 22 09:28:51 crc kubenswrapper[4892]: I0122 09:28:51.661114 4892 generic.go:334] "Generic (PLEG): container finished" podID="eca8ea69-b1df-4f64-b894-d1a33fedef9d" containerID="ac30620e4a7441c5bf8b9df49d535cd8d829ef64737ab2a5bbf263692d347eb2" exitCode=137 Jan 22 09:28:51 crc kubenswrapper[4892]: I0122 09:28:51.661216 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75dc8c68f7-bpxpg" event={"ID":"eca8ea69-b1df-4f64-b894-d1a33fedef9d","Type":"ContainerDied","Data":"ac30620e4a7441c5bf8b9df49d535cd8d829ef64737ab2a5bbf263692d347eb2"} Jan 22 09:28:51 crc kubenswrapper[4892]: I0122 09:28:51.679197 4892 generic.go:334] "Generic (PLEG): container finished" podID="0b1b8dc7-a488-447d-8c4a-21119f3e3dd4" 
containerID="d6b4ec33cb48d062ea41f639919a5b3e17d94fe4ffe83d9b4bde6b7be77d52cd" exitCode=0 Jan 22 09:28:51 crc kubenswrapper[4892]: I0122 09:28:51.679258 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4","Type":"ContainerDied","Data":"d6b4ec33cb48d062ea41f639919a5b3e17d94fe4ffe83d9b4bde6b7be77d52cd"} Jan 22 09:28:51 crc kubenswrapper[4892]: I0122 09:28:51.682862 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"79d3567b-bda5-4ea2-9e1d-24f617405a38","Type":"ContainerStarted","Data":"5ca976d5f619a9875e225b06d8f339f67814d869403c739b6b06b7293b1786db"} Jan 22 09:28:51 crc kubenswrapper[4892]: I0122 09:28:51.697486 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7fbbbbb748-pbjzb" event={"ID":"5766f7c7-fa97-4fe0-af76-e06667a8079b","Type":"ContainerStarted","Data":"92f4deaf1e97c6c8f682cb1b378f077c2d766b5250783bf688134b8e64703a5d"} Jan 22 09:28:51 crc kubenswrapper[4892]: I0122 09:28:51.697522 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7fbbbbb748-pbjzb" Jan 22 09:28:51 crc kubenswrapper[4892]: I0122 09:28:51.697551 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7fbbbbb748-pbjzb" Jan 22 09:28:51 crc kubenswrapper[4892]: W0122 09:28:51.715173 4892 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8998452c_d0f3_42a2_8741_c70ffe854fda.slice/crio-conmon-5060f249a60748b47dfbd46df678cc5143123eb7368e6309d40a896bee2771d1.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8998452c_d0f3_42a2_8741_c70ffe854fda.slice/crio-conmon-5060f249a60748b47dfbd46df678cc5143123eb7368e6309d40a896bee2771d1.scope: no such file or directory Jan 22 09:28:51 crc kubenswrapper[4892]: W0122 09:28:51.717209 4892 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8998452c_d0f3_42a2_8741_c70ffe854fda.slice/crio-5060f249a60748b47dfbd46df678cc5143123eb7368e6309d40a896bee2771d1.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8998452c_d0f3_42a2_8741_c70ffe854fda.slice/crio-5060f249a60748b47dfbd46df678cc5143123eb7368e6309d40a896bee2771d1.scope: no such file or directory Jan 22 09:28:51 crc kubenswrapper[4892]: W0122 09:28:51.718998 4892 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb3bd5d3_0b01_47ff_8410_95fd1e351060.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb3bd5d3_0b01_47ff_8410_95fd1e351060.slice: no such file or directory Jan 22 09:28:51 crc kubenswrapper[4892]: I0122 09:28:51.733021 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7fbbbbb748-pbjzb" podStartSLOduration=3.733000021 podStartE2EDuration="3.733000021s" podCreationTimestamp="2026-01-22 09:28:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:28:51.729170529 +0000 UTC m=+1101.573249602" watchObservedRunningTime="2026-01-22 09:28:51.733000021 +0000 UTC 
m=+1101.577079084" Jan 22 09:28:51 crc kubenswrapper[4892]: W0122 09:28:51.755148 4892 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b1b8dc7_a488_447d_8c4a_21119f3e3dd4.slice/crio-conmon-a012fa80b13ee587d26579452c5e50427e2ae4599e5536bf2c24c2b364a3bcb1.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b1b8dc7_a488_447d_8c4a_21119f3e3dd4.slice/crio-conmon-a012fa80b13ee587d26579452c5e50427e2ae4599e5536bf2c24c2b364a3bcb1.scope: no such file or directory Jan 22 09:28:51 crc kubenswrapper[4892]: W0122 09:28:51.755226 4892 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b1b8dc7_a488_447d_8c4a_21119f3e3dd4.slice/crio-a012fa80b13ee587d26579452c5e50427e2ae4599e5536bf2c24c2b364a3bcb1.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b1b8dc7_a488_447d_8c4a_21119f3e3dd4.slice/crio-a012fa80b13ee587d26579452c5e50427e2ae4599e5536bf2c24c2b364a3bcb1.scope: no such file or directory Jan 22 09:28:51 crc kubenswrapper[4892]: E0122 09:28:51.972150 4892 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8998452c_d0f3_42a2_8741_c70ffe854fda.slice/crio-0005ce942026eabc6d92729f78d41a6a0b8826c9e10a49dca4032d66ad43446a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06ba4135_00fc_4891_bad5_e2e666eabd91.slice/crio-b14661a51e2b4a1757eefb97cce586dc1288f184fa8815866dfd89e58416f7c2\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fa10241_be09_4db5_894b_845654f34a21.slice/crio-d3ffe4c077a923df65a6d90f20f7710d12ac358b6c20db9d26af90c6bca7eb85.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b1b8dc7_a488_447d_8c4a_21119f3e3dd4.slice/crio-conmon-d6b4ec33cb48d062ea41f639919a5b3e17d94fe4ffe83d9b4bde6b7be77d52cd.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b1b8dc7_a488_447d_8c4a_21119f3e3dd4.slice/crio-2c7775e717dd114b82af204da201ff6ca2f17a0cb0e04dea261967b6e52e8d47.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeca8ea69_b1df_4f64_b894_d1a33fedef9d.slice/crio-conmon-ac30620e4a7441c5bf8b9df49d535cd8d829ef64737ab2a5bbf263692d347eb2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeca8ea69_b1df_4f64_b894_d1a33fedef9d.slice/crio-ac30620e4a7441c5bf8b9df49d535cd8d829ef64737ab2a5bbf263692d347eb2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd02e1363_2043_4097_bda0_012158a0bf56.slice/crio-conmon-4c7946be0ef7904033024a8768ed1cf521bdc88101ca19e49f9f4abae56c094d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fa10241_be09_4db5_894b_845654f34a21.slice/crio-conmon-d3ffe4c077a923df65a6d90f20f7710d12ac358b6c20db9d26af90c6bca7eb85.scope\": RecentStats: unable to find 
data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeca8ea69_b1df_4f64_b894_d1a33fedef9d.slice/crio-f50283ea3aeaeadfb0c01695dc2a745e1231e8ab34f3d079c0466ba9ed5f2c27.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fa10241_be09_4db5_894b_845654f34a21.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd02e1363_2043_4097_bda0_012158a0bf56.slice/crio-conmon-fe4c78110a038a44d53d02ee5c6b45661aa807aa6742c35d586291b7fd57fca5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fa10241_be09_4db5_894b_845654f34a21.slice/crio-4d11fba568a9b4a1b840e17981766f134e3f4b353fbca67e6ec50cbbbbe146dc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8998452c_d0f3_42a2_8741_c70ffe854fda.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06ba4135_00fc_4891_bad5_e2e666eabd91.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b1b8dc7_a488_447d_8c4a_21119f3e3dd4.slice/crio-conmon-2c7775e717dd114b82af204da201ff6ca2f17a0cb0e04dea261967b6e52e8d47.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd02e1363_2043_4097_bda0_012158a0bf56.slice/crio-4c7946be0ef7904033024a8768ed1cf521bdc88101ca19e49f9f4abae56c094d.scope\": RecentStats: unable to find data in memory cache]" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.288749 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.434067 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-scripts\") pod \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\" (UID: \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\") " Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.434782 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-log-httpd\") pod \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\" (UID: \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\") " Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.434824 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-config-data\") pod \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\" (UID: \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\") " Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.434863 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68sl5\" (UniqueName: \"kubernetes.io/projected/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-kube-api-access-68sl5\") pod \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\" (UID: \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\") " Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.434974 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-run-httpd\") pod \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\" (UID: \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\") " Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.434995 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-combined-ca-bundle\") pod \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\" (UID: \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\") " Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.435029 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-sg-core-conf-yaml\") pod \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\" (UID: \"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4\") " Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.436388 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0b1b8dc7-a488-447d-8c4a-21119f3e3dd4" (UID: "0b1b8dc7-a488-447d-8c4a-21119f3e3dd4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.436704 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0b1b8dc7-a488-447d-8c4a-21119f3e3dd4" (UID: "0b1b8dc7-a488-447d-8c4a-21119f3e3dd4"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.453771 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-scripts" (OuterVolumeSpecName: "scripts") pod "0b1b8dc7-a488-447d-8c4a-21119f3e3dd4" (UID: "0b1b8dc7-a488-447d-8c4a-21119f3e3dd4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.454166 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-kube-api-access-68sl5" (OuterVolumeSpecName: "kube-api-access-68sl5") pod "0b1b8dc7-a488-447d-8c4a-21119f3e3dd4" (UID: "0b1b8dc7-a488-447d-8c4a-21119f3e3dd4"). InnerVolumeSpecName "kube-api-access-68sl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.537449 4892 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.537478 4892 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.537492 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68sl5\" (UniqueName: \"kubernetes.io/projected/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-kube-api-access-68sl5\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.537502 4892 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.563264 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75dc8c68f7-bpxpg" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.583411 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0b1b8dc7-a488-447d-8c4a-21119f3e3dd4" (UID: "0b1b8dc7-a488-447d-8c4a-21119f3e3dd4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.598122 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5b6778c569-gt8dk" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.642572 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eca8ea69-b1df-4f64-b894-d1a33fedef9d-logs\") pod \"eca8ea69-b1df-4f64-b894-d1a33fedef9d\" (UID: \"eca8ea69-b1df-4f64-b894-d1a33fedef9d\") " Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.642697 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eca8ea69-b1df-4f64-b894-d1a33fedef9d-scripts\") pod \"eca8ea69-b1df-4f64-b894-d1a33fedef9d\" (UID: \"eca8ea69-b1df-4f64-b894-d1a33fedef9d\") " Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.642793 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eca8ea69-b1df-4f64-b894-d1a33fedef9d-config-data\") pod \"eca8ea69-b1df-4f64-b894-d1a33fedef9d\" (UID: \"eca8ea69-b1df-4f64-b894-d1a33fedef9d\") " Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.642839 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/eca8ea69-b1df-4f64-b894-d1a33fedef9d-horizon-secret-key\") pod \"eca8ea69-b1df-4f64-b894-d1a33fedef9d\" (UID: \"eca8ea69-b1df-4f64-b894-d1a33fedef9d\") " Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.642861 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drqm4\" (UniqueName: \"kubernetes.io/projected/eca8ea69-b1df-4f64-b894-d1a33fedef9d-kube-api-access-drqm4\") pod \"eca8ea69-b1df-4f64-b894-d1a33fedef9d\" (UID: \"eca8ea69-b1df-4f64-b894-d1a33fedef9d\") " Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.643441 4892 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.644421 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eca8ea69-b1df-4f64-b894-d1a33fedef9d-logs" (OuterVolumeSpecName: "logs") pod "eca8ea69-b1df-4f64-b894-d1a33fedef9d" (UID: "eca8ea69-b1df-4f64-b894-d1a33fedef9d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.647767 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eca8ea69-b1df-4f64-b894-d1a33fedef9d-kube-api-access-drqm4" (OuterVolumeSpecName: "kube-api-access-drqm4") pod "eca8ea69-b1df-4f64-b894-d1a33fedef9d" (UID: "eca8ea69-b1df-4f64-b894-d1a33fedef9d"). InnerVolumeSpecName "kube-api-access-drqm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.650970 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eca8ea69-b1df-4f64-b894-d1a33fedef9d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "eca8ea69-b1df-4f64-b894-d1a33fedef9d" (UID: "eca8ea69-b1df-4f64-b894-d1a33fedef9d"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.692144 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eca8ea69-b1df-4f64-b894-d1a33fedef9d-scripts" (OuterVolumeSpecName: "scripts") pod "eca8ea69-b1df-4f64-b894-d1a33fedef9d" (UID: "eca8ea69-b1df-4f64-b894-d1a33fedef9d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.694709 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eca8ea69-b1df-4f64-b894-d1a33fedef9d-config-data" (OuterVolumeSpecName: "config-data") pod "eca8ea69-b1df-4f64-b894-d1a33fedef9d" (UID: "eca8ea69-b1df-4f64-b894-d1a33fedef9d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.703213 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b1b8dc7-a488-447d-8c4a-21119f3e3dd4" (UID: "0b1b8dc7-a488-447d-8c4a-21119f3e3dd4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.716723 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.716856 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b1b8dc7-a488-447d-8c4a-21119f3e3dd4","Type":"ContainerDied","Data":"7c0318ac928ffd8156754d29d2f32055611528ba4b5714619766c7da822d3d1e"} Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.717000 4892 scope.go:117] "RemoveContainer" containerID="a012fa80b13ee587d26579452c5e50427e2ae4599e5536bf2c24c2b364a3bcb1" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.719002 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6cbdbc497d-dqskw" event={"ID":"4d7e7ea0-d123-41ab-bd59-0f6da52316bd","Type":"ContainerStarted","Data":"b795d4f291515ad260bc5cec64fbe2fb6d8c98e23158951820cd9acfc3b59484"} Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.722692 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7c7f45b4bf-xx9p2" event={"ID":"1a6b0877-2c23-4ebd-a433-620571e4c0bf","Type":"ContainerStarted","Data":"33fc983a6b33c6b6f3318d58883f3b32e2d6991e547a4b5e052979e80984d885"} Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.727458 4892 generic.go:334] "Generic (PLEG): container finished" podID="d02e1363-2043-4097-bda0-012158a0bf56" containerID="fe4c78110a038a44d53d02ee5c6b45661aa807aa6742c35d586291b7fd57fca5" exitCode=137 Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.727500 4892 generic.go:334] "Generic (PLEG): container finished" podID="d02e1363-2043-4097-bda0-012158a0bf56" containerID="4c7946be0ef7904033024a8768ed1cf521bdc88101ca19e49f9f4abae56c094d" exitCode=137 Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.727550 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b6778c569-gt8dk" event={"ID":"d02e1363-2043-4097-bda0-012158a0bf56","Type":"ContainerDied","Data":"fe4c78110a038a44d53d02ee5c6b45661aa807aa6742c35d586291b7fd57fca5"} Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.727585 4892 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b6778c569-gt8dk" event={"ID":"d02e1363-2043-4097-bda0-012158a0bf56","Type":"ContainerDied","Data":"4c7946be0ef7904033024a8768ed1cf521bdc88101ca19e49f9f4abae56c094d"} Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.727601 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b6778c569-gt8dk" event={"ID":"d02e1363-2043-4097-bda0-012158a0bf56","Type":"ContainerDied","Data":"dfee83298b4e9f748aa712dbf1e331f085cb40c4fab4178d42abd96fb24175a1"} Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.727668 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b6778c569-gt8dk" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.733643 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-fqr94" event={"ID":"a4609a5c-a8a3-4516-82db-e66273379720","Type":"ContainerStarted","Data":"4a1fbb875a7cbe42c04e4f5d1d22c48f403b74ffbf539eba43bf3ea73e161559"} Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.734704 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75bfc9b94f-fqr94" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.739182 4892 generic.go:334] "Generic (PLEG): container finished" podID="eca8ea69-b1df-4f64-b894-d1a33fedef9d" containerID="f50283ea3aeaeadfb0c01695dc2a745e1231e8ab34f3d079c0466ba9ed5f2c27" exitCode=137 Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.740014 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75dc8c68f7-bpxpg" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.741459 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75dc8c68f7-bpxpg" event={"ID":"eca8ea69-b1df-4f64-b894-d1a33fedef9d","Type":"ContainerDied","Data":"f50283ea3aeaeadfb0c01695dc2a745e1231e8ab34f3d079c0466ba9ed5f2c27"} Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.741517 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75dc8c68f7-bpxpg" event={"ID":"eca8ea69-b1df-4f64-b894-d1a33fedef9d","Type":"ContainerDied","Data":"8df3b20295458f184b31a6dc5a61e592ab63ddb756080bf47c295d9cc731ceaa"} Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.744685 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d02e1363-2043-4097-bda0-012158a0bf56-scripts\") pod \"d02e1363-2043-4097-bda0-012158a0bf56\" (UID: \"d02e1363-2043-4097-bda0-012158a0bf56\") " Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.744795 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d02e1363-2043-4097-bda0-012158a0bf56-logs\") pod \"d02e1363-2043-4097-bda0-012158a0bf56\" (UID: \"d02e1363-2043-4097-bda0-012158a0bf56\") " Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.744856 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft4q2\" (UniqueName: \"kubernetes.io/projected/d02e1363-2043-4097-bda0-012158a0bf56-kube-api-access-ft4q2\") pod \"d02e1363-2043-4097-bda0-012158a0bf56\" (UID: \"d02e1363-2043-4097-bda0-012158a0bf56\") " Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.744874 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/d02e1363-2043-4097-bda0-012158a0bf56-config-data\") pod \"d02e1363-2043-4097-bda0-012158a0bf56\" (UID: \"d02e1363-2043-4097-bda0-012158a0bf56\") " Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.744916 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d02e1363-2043-4097-bda0-012158a0bf56-horizon-secret-key\") pod \"d02e1363-2043-4097-bda0-012158a0bf56\" (UID: \"d02e1363-2043-4097-bda0-012158a0bf56\") " Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.745230 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eca8ea69-b1df-4f64-b894-d1a33fedef9d-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.745246 4892 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/eca8ea69-b1df-4f64-b894-d1a33fedef9d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.745255 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drqm4\" (UniqueName: \"kubernetes.io/projected/eca8ea69-b1df-4f64-b894-d1a33fedef9d-kube-api-access-drqm4\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.745265 4892 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eca8ea69-b1df-4f64-b894-d1a33fedef9d-logs\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.745273 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.745294 4892 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eca8ea69-b1df-4f64-b894-d1a33fedef9d-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.746762 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d02e1363-2043-4097-bda0-012158a0bf56-logs" (OuterVolumeSpecName: "logs") pod "d02e1363-2043-4097-bda0-012158a0bf56" (UID: "d02e1363-2043-4097-bda0-012158a0bf56"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.755749 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d02e1363-2043-4097-bda0-012158a0bf56-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d02e1363-2043-4097-bda0-012158a0bf56" (UID: "d02e1363-2043-4097-bda0-012158a0bf56"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.756124 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-config-data" (OuterVolumeSpecName: "config-data") pod "0b1b8dc7-a488-447d-8c4a-21119f3e3dd4" (UID: "0b1b8dc7-a488-447d-8c4a-21119f3e3dd4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.764798 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75bfc9b94f-fqr94" podStartSLOduration=4.764750481 podStartE2EDuration="4.764750481s" podCreationTimestamp="2026-01-22 09:28:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:28:52.753266515 +0000 UTC m=+1102.597345578" watchObservedRunningTime="2026-01-22 09:28:52.764750481 +0000 UTC m=+1102.608829544" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.765955 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d02e1363-2043-4097-bda0-012158a0bf56-kube-api-access-ft4q2" (OuterVolumeSpecName: "kube-api-access-ft4q2") pod "d02e1363-2043-4097-bda0-012158a0bf56" (UID: "d02e1363-2043-4097-bda0-012158a0bf56"). InnerVolumeSpecName "kube-api-access-ft4q2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.773663 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d02e1363-2043-4097-bda0-012158a0bf56-scripts" (OuterVolumeSpecName: "scripts") pod "d02e1363-2043-4097-bda0-012158a0bf56" (UID: "d02e1363-2043-4097-bda0-012158a0bf56"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.781444 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d02e1363-2043-4097-bda0-012158a0bf56-config-data" (OuterVolumeSpecName: "config-data") pod "d02e1363-2043-4097-bda0-012158a0bf56" (UID: "d02e1363-2043-4097-bda0-012158a0bf56"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.846875 4892 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d02e1363-2043-4097-bda0-012158a0bf56-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.846901 4892 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d02e1363-2043-4097-bda0-012158a0bf56-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.846910 4892 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d02e1363-2043-4097-bda0-012158a0bf56-logs\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.846918 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.846926 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft4q2\" (UniqueName: \"kubernetes.io/projected/d02e1363-2043-4097-bda0-012158a0bf56-kube-api-access-ft4q2\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:52 crc kubenswrapper[4892]: I0122 09:28:52.846936 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d02e1363-2043-4097-bda0-012158a0bf56-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.042427 4892 scope.go:117] "RemoveContainer" containerID="2c7775e717dd114b82af204da201ff6ca2f17a0cb0e04dea261967b6e52e8d47" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.095190 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75dc8c68f7-bpxpg"] Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.096971 4892 scope.go:117] "RemoveContainer" containerID="d6b4ec33cb48d062ea41f639919a5b3e17d94fe4ffe83d9b4bde6b7be77d52cd" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.107368 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-75dc8c68f7-bpxpg"] Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.132092 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.135581 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.143707 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b6778c569-gt8dk"] Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.155111 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:28:53 crc kubenswrapper[4892]: E0122 09:28:53.155498 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db3bd5d3-0b01-47ff-8410-95fd1e351060" containerName="init" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.155512 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="db3bd5d3-0b01-47ff-8410-95fd1e351060" containerName="init" Jan 22 09:28:53 crc kubenswrapper[4892]: E0122 09:28:53.155525 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eca8ea69-b1df-4f64-b894-d1a33fedef9d" containerName="horizon-log" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.155531 
4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="eca8ea69-b1df-4f64-b894-d1a33fedef9d" containerName="horizon-log" Jan 22 09:28:53 crc kubenswrapper[4892]: E0122 09:28:53.155545 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b1b8dc7-a488-447d-8c4a-21119f3e3dd4" containerName="sg-core" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.155551 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b1b8dc7-a488-447d-8c4a-21119f3e3dd4" containerName="sg-core" Jan 22 09:28:53 crc kubenswrapper[4892]: E0122 09:28:53.155558 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d02e1363-2043-4097-bda0-012158a0bf56" containerName="horizon" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.155564 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="d02e1363-2043-4097-bda0-012158a0bf56" containerName="horizon" Jan 22 09:28:53 crc kubenswrapper[4892]: E0122 09:28:53.155571 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eca8ea69-b1df-4f64-b894-d1a33fedef9d" containerName="horizon" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.155577 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="eca8ea69-b1df-4f64-b894-d1a33fedef9d" containerName="horizon" Jan 22 09:28:53 crc kubenswrapper[4892]: E0122 09:28:53.155592 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db3bd5d3-0b01-47ff-8410-95fd1e351060" containerName="dnsmasq-dns" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.155598 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="db3bd5d3-0b01-47ff-8410-95fd1e351060" containerName="dnsmasq-dns" Jan 22 09:28:53 crc kubenswrapper[4892]: E0122 09:28:53.155606 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b1b8dc7-a488-447d-8c4a-21119f3e3dd4" containerName="ceilometer-notification-agent" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.155613 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b1b8dc7-a488-447d-8c4a-21119f3e3dd4" containerName="ceilometer-notification-agent" Jan 22 09:28:53 crc kubenswrapper[4892]: E0122 09:28:53.155622 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d02e1363-2043-4097-bda0-012158a0bf56" containerName="horizon-log" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.155627 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="d02e1363-2043-4097-bda0-012158a0bf56" containerName="horizon-log" Jan 22 09:28:53 crc kubenswrapper[4892]: E0122 09:28:53.155648 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b1b8dc7-a488-447d-8c4a-21119f3e3dd4" containerName="proxy-httpd" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.155653 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b1b8dc7-a488-447d-8c4a-21119f3e3dd4" containerName="proxy-httpd" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.155810 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b1b8dc7-a488-447d-8c4a-21119f3e3dd4" containerName="ceilometer-notification-agent" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.155820 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="eca8ea69-b1df-4f64-b894-d1a33fedef9d" containerName="horizon" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.155830 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="d02e1363-2043-4097-bda0-012158a0bf56" containerName="horizon-log" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.155840 4892 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d02e1363-2043-4097-bda0-012158a0bf56" containerName="horizon" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.155854 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b1b8dc7-a488-447d-8c4a-21119f3e3dd4" containerName="proxy-httpd" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.155862 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b1b8dc7-a488-447d-8c4a-21119f3e3dd4" containerName="sg-core" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.155870 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="eca8ea69-b1df-4f64-b894-d1a33fedef9d" containerName="horizon-log" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.155878 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="db3bd5d3-0b01-47ff-8410-95fd1e351060" containerName="dnsmasq-dns" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.157788 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.162074 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.162196 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.172747 4892 scope.go:117] "RemoveContainer" containerID="fe4c78110a038a44d53d02ee5c6b45661aa807aa6742c35d586291b7fd57fca5" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.191434 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5b6778c569-gt8dk"] Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.193795 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7db6cccc7c-d7w6r" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.200880 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.261004 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c3b473db-e057-4d55-b3e6-171e8618722f-config-data\") pod \"c3b473db-e057-4d55-b3e6-171e8618722f\" (UID: \"c3b473db-e057-4d55-b3e6-171e8618722f\") " Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.261193 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c3b473db-e057-4d55-b3e6-171e8618722f-scripts\") pod \"c3b473db-e057-4d55-b3e6-171e8618722f\" (UID: \"c3b473db-e057-4d55-b3e6-171e8618722f\") " Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.261226 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c3b473db-e057-4d55-b3e6-171e8618722f-horizon-secret-key\") pod \"c3b473db-e057-4d55-b3e6-171e8618722f\" (UID: \"c3b473db-e057-4d55-b3e6-171e8618722f\") " Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.261252 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqhps\" (UniqueName: \"kubernetes.io/projected/c3b473db-e057-4d55-b3e6-171e8618722f-kube-api-access-kqhps\") pod \"c3b473db-e057-4d55-b3e6-171e8618722f\" (UID: \"c3b473db-e057-4d55-b3e6-171e8618722f\") " Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.261276 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3b473db-e057-4d55-b3e6-171e8618722f-logs\") pod \"c3b473db-e057-4d55-b3e6-171e8618722f\" (UID: \"c3b473db-e057-4d55-b3e6-171e8618722f\") " Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.261549 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64acec33-a0ea-4b7f-b7dd-ae704b047a95-config-data\") pod \"ceilometer-0\" (UID: \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\") " pod="openstack/ceilometer-0" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.261590 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64acec33-a0ea-4b7f-b7dd-ae704b047a95-log-httpd\") pod \"ceilometer-0\" (UID: \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\") " pod="openstack/ceilometer-0" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.261609 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64acec33-a0ea-4b7f-b7dd-ae704b047a95-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\") " pod="openstack/ceilometer-0" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.261631 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/64acec33-a0ea-4b7f-b7dd-ae704b047a95-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\") " pod="openstack/ceilometer-0" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.261663 4892 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64acec33-a0ea-4b7f-b7dd-ae704b047a95-run-httpd\") pod \"ceilometer-0\" (UID: \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\") " pod="openstack/ceilometer-0" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.261678 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64acec33-a0ea-4b7f-b7dd-ae704b047a95-scripts\") pod \"ceilometer-0\" (UID: \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\") " pod="openstack/ceilometer-0" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.261717 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7h6d\" (UniqueName: \"kubernetes.io/projected/64acec33-a0ea-4b7f-b7dd-ae704b047a95-kube-api-access-m7h6d\") pod \"ceilometer-0\" (UID: \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\") " pod="openstack/ceilometer-0" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.262413 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3b473db-e057-4d55-b3e6-171e8618722f-logs" (OuterVolumeSpecName: "logs") pod "c3b473db-e057-4d55-b3e6-171e8618722f" (UID: "c3b473db-e057-4d55-b3e6-171e8618722f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.268875 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3b473db-e057-4d55-b3e6-171e8618722f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c3b473db-e057-4d55-b3e6-171e8618722f" (UID: "c3b473db-e057-4d55-b3e6-171e8618722f"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.271962 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3b473db-e057-4d55-b3e6-171e8618722f-kube-api-access-kqhps" (OuterVolumeSpecName: "kube-api-access-kqhps") pod "c3b473db-e057-4d55-b3e6-171e8618722f" (UID: "c3b473db-e057-4d55-b3e6-171e8618722f"). InnerVolumeSpecName "kube-api-access-kqhps". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.290943 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3b473db-e057-4d55-b3e6-171e8618722f-config-data" (OuterVolumeSpecName: "config-data") pod "c3b473db-e057-4d55-b3e6-171e8618722f" (UID: "c3b473db-e057-4d55-b3e6-171e8618722f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.293813 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3b473db-e057-4d55-b3e6-171e8618722f-scripts" (OuterVolumeSpecName: "scripts") pod "c3b473db-e057-4d55-b3e6-171e8618722f" (UID: "c3b473db-e057-4d55-b3e6-171e8618722f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.363556 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7h6d\" (UniqueName: \"kubernetes.io/projected/64acec33-a0ea-4b7f-b7dd-ae704b047a95-kube-api-access-m7h6d\") pod \"ceilometer-0\" (UID: \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\") " pod="openstack/ceilometer-0"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.363721 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64acec33-a0ea-4b7f-b7dd-ae704b047a95-config-data\") pod \"ceilometer-0\" (UID: \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\") " pod="openstack/ceilometer-0"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.363766 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64acec33-a0ea-4b7f-b7dd-ae704b047a95-log-httpd\") pod \"ceilometer-0\" (UID: \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\") " pod="openstack/ceilometer-0"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.363787 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64acec33-a0ea-4b7f-b7dd-ae704b047a95-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\") " pod="openstack/ceilometer-0"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.363816 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/64acec33-a0ea-4b7f-b7dd-ae704b047a95-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\") " pod="openstack/ceilometer-0"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.363855 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64acec33-a0ea-4b7f-b7dd-ae704b047a95-run-httpd\") pod \"ceilometer-0\" (UID: \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\") " pod="openstack/ceilometer-0"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.363878 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64acec33-a0ea-4b7f-b7dd-ae704b047a95-scripts\") pod \"ceilometer-0\" (UID: \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\") " pod="openstack/ceilometer-0"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.363936 4892 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c3b473db-e057-4d55-b3e6-171e8618722f-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.363951 4892 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c3b473db-e057-4d55-b3e6-171e8618722f-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.363964 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqhps\" (UniqueName: \"kubernetes.io/projected/c3b473db-e057-4d55-b3e6-171e8618722f-kube-api-access-kqhps\") on node \"crc\" DevicePath \"\""
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.363975 4892 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3b473db-e057-4d55-b3e6-171e8618722f-logs\") on node \"crc\" DevicePath \"\""
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.363986 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c3b473db-e057-4d55-b3e6-171e8618722f-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.365390 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64acec33-a0ea-4b7f-b7dd-ae704b047a95-run-httpd\") pod \"ceilometer-0\" (UID: \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\") " pod="openstack/ceilometer-0"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.365702 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64acec33-a0ea-4b7f-b7dd-ae704b047a95-log-httpd\") pod \"ceilometer-0\" (UID: \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\") " pod="openstack/ceilometer-0"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.368214 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/64acec33-a0ea-4b7f-b7dd-ae704b047a95-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\") " pod="openstack/ceilometer-0"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.369159 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64acec33-a0ea-4b7f-b7dd-ae704b047a95-scripts\") pod \"ceilometer-0\" (UID: \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\") " pod="openstack/ceilometer-0"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.370942 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64acec33-a0ea-4b7f-b7dd-ae704b047a95-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\") " pod="openstack/ceilometer-0"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.371381 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64acec33-a0ea-4b7f-b7dd-ae704b047a95-config-data\") pod \"ceilometer-0\" (UID: \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\") " pod="openstack/ceilometer-0"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.379234 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7h6d\" (UniqueName: \"kubernetes.io/projected/64acec33-a0ea-4b7f-b7dd-ae704b047a95-kube-api-access-m7h6d\") pod \"ceilometer-0\" (UID: \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\") " pod="openstack/ceilometer-0"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.402790 4892 scope.go:117] "RemoveContainer" containerID="4c7946be0ef7904033024a8768ed1cf521bdc88101ca19e49f9f4abae56c094d"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.429521 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b1b8dc7-a488-447d-8c4a-21119f3e3dd4" path="/var/lib/kubelet/pods/0b1b8dc7-a488-447d-8c4a-21119f3e3dd4/volumes"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.431107 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d02e1363-2043-4097-bda0-012158a0bf56" path="/var/lib/kubelet/pods/d02e1363-2043-4097-bda0-012158a0bf56/volumes"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.432559 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eca8ea69-b1df-4f64-b894-d1a33fedef9d" path="/var/lib/kubelet/pods/eca8ea69-b1df-4f64-b894-d1a33fedef9d/volumes"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.459449 4892 scope.go:117] "RemoveContainer" containerID="fe4c78110a038a44d53d02ee5c6b45661aa807aa6742c35d586291b7fd57fca5"
Jan 22 09:28:53 crc kubenswrapper[4892]: E0122 09:28:53.461259 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe4c78110a038a44d53d02ee5c6b45661aa807aa6742c35d586291b7fd57fca5\": container with ID starting with fe4c78110a038a44d53d02ee5c6b45661aa807aa6742c35d586291b7fd57fca5 not found: ID does not exist" containerID="fe4c78110a038a44d53d02ee5c6b45661aa807aa6742c35d586291b7fd57fca5"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.462324 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe4c78110a038a44d53d02ee5c6b45661aa807aa6742c35d586291b7fd57fca5"} err="failed to get container status \"fe4c78110a038a44d53d02ee5c6b45661aa807aa6742c35d586291b7fd57fca5\": rpc error: code = NotFound desc = could not find container \"fe4c78110a038a44d53d02ee5c6b45661aa807aa6742c35d586291b7fd57fca5\": container with ID starting with fe4c78110a038a44d53d02ee5c6b45661aa807aa6742c35d586291b7fd57fca5 not found: ID does not exist"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.462432 4892 scope.go:117] "RemoveContainer" containerID="4c7946be0ef7904033024a8768ed1cf521bdc88101ca19e49f9f4abae56c094d"
Jan 22 09:28:53 crc kubenswrapper[4892]: E0122 09:28:53.466471 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c7946be0ef7904033024a8768ed1cf521bdc88101ca19e49f9f4abae56c094d\": container with ID starting with 4c7946be0ef7904033024a8768ed1cf521bdc88101ca19e49f9f4abae56c094d not found: ID does not exist" containerID="4c7946be0ef7904033024a8768ed1cf521bdc88101ca19e49f9f4abae56c094d"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.466678 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c7946be0ef7904033024a8768ed1cf521bdc88101ca19e49f9f4abae56c094d"} err="failed to get container status \"4c7946be0ef7904033024a8768ed1cf521bdc88101ca19e49f9f4abae56c094d\": rpc error: code = NotFound desc = could not find container \"4c7946be0ef7904033024a8768ed1cf521bdc88101ca19e49f9f4abae56c094d\": container with ID starting with 4c7946be0ef7904033024a8768ed1cf521bdc88101ca19e49f9f4abae56c094d not found: ID does not exist"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.466771 4892 scope.go:117] "RemoveContainer" containerID="fe4c78110a038a44d53d02ee5c6b45661aa807aa6742c35d586291b7fd57fca5"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.471070 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe4c78110a038a44d53d02ee5c6b45661aa807aa6742c35d586291b7fd57fca5"} err="failed to get container status \"fe4c78110a038a44d53d02ee5c6b45661aa807aa6742c35d586291b7fd57fca5\": rpc error: code = NotFound desc = could not find container \"fe4c78110a038a44d53d02ee5c6b45661aa807aa6742c35d586291b7fd57fca5\": container with ID starting with fe4c78110a038a44d53d02ee5c6b45661aa807aa6742c35d586291b7fd57fca5 not found: ID does not exist"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.471118 4892 scope.go:117] "RemoveContainer" containerID="4c7946be0ef7904033024a8768ed1cf521bdc88101ca19e49f9f4abae56c094d"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.472552 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c7946be0ef7904033024a8768ed1cf521bdc88101ca19e49f9f4abae56c094d"} err="failed to get container status \"4c7946be0ef7904033024a8768ed1cf521bdc88101ca19e49f9f4abae56c094d\": rpc error: code = NotFound desc = could not find container \"4c7946be0ef7904033024a8768ed1cf521bdc88101ca19e49f9f4abae56c094d\": container with ID starting with 4c7946be0ef7904033024a8768ed1cf521bdc88101ca19e49f9f4abae56c094d not found: ID does not exist"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.472610 4892 scope.go:117] "RemoveContainer" containerID="f50283ea3aeaeadfb0c01695dc2a745e1231e8ab34f3d079c0466ba9ed5f2c27"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.488369 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.683626 4892 scope.go:117] "RemoveContainer" containerID="ac30620e4a7441c5bf8b9df49d535cd8d829ef64737ab2a5bbf263692d347eb2"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.707790 4892 scope.go:117] "RemoveContainer" containerID="f50283ea3aeaeadfb0c01695dc2a745e1231e8ab34f3d079c0466ba9ed5f2c27"
Jan 22 09:28:53 crc kubenswrapper[4892]: E0122 09:28:53.708851 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f50283ea3aeaeadfb0c01695dc2a745e1231e8ab34f3d079c0466ba9ed5f2c27\": container with ID starting with f50283ea3aeaeadfb0c01695dc2a745e1231e8ab34f3d079c0466ba9ed5f2c27 not found: ID does not exist" containerID="f50283ea3aeaeadfb0c01695dc2a745e1231e8ab34f3d079c0466ba9ed5f2c27"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.708890 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f50283ea3aeaeadfb0c01695dc2a745e1231e8ab34f3d079c0466ba9ed5f2c27"} err="failed to get container status \"f50283ea3aeaeadfb0c01695dc2a745e1231e8ab34f3d079c0466ba9ed5f2c27\": rpc error: code = NotFound desc = could not find container \"f50283ea3aeaeadfb0c01695dc2a745e1231e8ab34f3d079c0466ba9ed5f2c27\": container with ID starting with f50283ea3aeaeadfb0c01695dc2a745e1231e8ab34f3d079c0466ba9ed5f2c27 not found: ID does not exist"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.708916 4892 scope.go:117] "RemoveContainer" containerID="ac30620e4a7441c5bf8b9df49d535cd8d829ef64737ab2a5bbf263692d347eb2"
Jan 22 09:28:53 crc kubenswrapper[4892]: E0122 09:28:53.709161 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac30620e4a7441c5bf8b9df49d535cd8d829ef64737ab2a5bbf263692d347eb2\": container with ID starting with ac30620e4a7441c5bf8b9df49d535cd8d829ef64737ab2a5bbf263692d347eb2 not found: ID does not exist" containerID="ac30620e4a7441c5bf8b9df49d535cd8d829ef64737ab2a5bbf263692d347eb2"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.709189 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac30620e4a7441c5bf8b9df49d535cd8d829ef64737ab2a5bbf263692d347eb2"} err="failed to get container status \"ac30620e4a7441c5bf8b9df49d535cd8d829ef64737ab2a5bbf263692d347eb2\": rpc error: code = NotFound desc = could not find container \"ac30620e4a7441c5bf8b9df49d535cd8d829ef64737ab2a5bbf263692d347eb2\": container with ID starting with ac30620e4a7441c5bf8b9df49d535cd8d829ef64737ab2a5bbf263692d347eb2 not found: ID does not exist"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.769525 4892 generic.go:334] "Generic (PLEG): container finished" podID="c3b473db-e057-4d55-b3e6-171e8618722f" containerID="bf44bcedaa1c0496a00b680d7b533f93ba802aeb75351a460d0a4d75a60777c9" exitCode=137
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.769601 4892 generic.go:334] "Generic (PLEG): container finished" podID="c3b473db-e057-4d55-b3e6-171e8618722f" containerID="a72a66a408f5cdf755796c6059cfdc114c9be6f246a49fdad645f6179b68e468" exitCode=137
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.770031 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7db6cccc7c-d7w6r" event={"ID":"c3b473db-e057-4d55-b3e6-171e8618722f","Type":"ContainerDied","Data":"bf44bcedaa1c0496a00b680d7b533f93ba802aeb75351a460d0a4d75a60777c9"}
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.770929 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7db6cccc7c-d7w6r" event={"ID":"c3b473db-e057-4d55-b3e6-171e8618722f","Type":"ContainerDied","Data":"a72a66a408f5cdf755796c6059cfdc114c9be6f246a49fdad645f6179b68e468"}
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.770999 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7db6cccc7c-d7w6r" event={"ID":"c3b473db-e057-4d55-b3e6-171e8618722f","Type":"ContainerDied","Data":"66ba7deb8bf8b0a681b55f0ee5bd4e1f020883dcc6283c970d11f55280ad0bf5"}
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.771062 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7db6cccc7c-d7w6r"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.771073 4892 scope.go:117] "RemoveContainer" containerID="bf44bcedaa1c0496a00b680d7b533f93ba802aeb75351a460d0a4d75a60777c9"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.778158 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6cbdbc497d-dqskw" event={"ID":"4d7e7ea0-d123-41ab-bd59-0f6da52316bd","Type":"ContainerStarted","Data":"f03cfe1f878e9ae65ea1e26caa907e71d041dcb3d3156b3bf104a09b1230ec1a"}
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.787184 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7c7f45b4bf-xx9p2" event={"ID":"1a6b0877-2c23-4ebd-a433-620571e4c0bf","Type":"ContainerStarted","Data":"fd5128717b3cc9cf0e07695eeaac1c90b5b1158ba6ee245fe7d4d3b892871527"}
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.800255 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"79d3567b-bda5-4ea2-9e1d-24f617405a38","Type":"ContainerStarted","Data":"fbbaa2bb002ebd669e13c65d11c7317ab84a8a6458ad0953fdb656539cee30b6"}
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.800473 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="79d3567b-bda5-4ea2-9e1d-24f617405a38" containerName="cinder-api-log" containerID="cri-o://5ca976d5f619a9875e225b06d8f339f67814d869403c739b6b06b7293b1786db" gracePeriod=30
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.800739 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.800769 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="79d3567b-bda5-4ea2-9e1d-24f617405a38" containerName="cinder-api" containerID="cri-o://fbbaa2bb002ebd669e13c65d11c7317ab84a8a6458ad0953fdb656539cee30b6" gracePeriod=30
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.806228 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6cbdbc497d-dqskw" podStartSLOduration=3.672626813 podStartE2EDuration="6.80621427s" podCreationTimestamp="2026-01-22 09:28:47 +0000 UTC" firstStartedPulling="2026-01-22 09:28:49.088348868 +0000 UTC m=+1098.932427931" lastFinishedPulling="2026-01-22 09:28:52.221936325 +0000 UTC m=+1102.066015388" observedRunningTime="2026-01-22 09:28:53.801061925 +0000 UTC m=+1103.645140988" watchObservedRunningTime="2026-01-22 09:28:53.80621427 +0000 UTC m=+1103.650293333"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.807879 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8","Type":"ContainerStarted","Data":"5850edd1813fc7859f01aca76116bec2c8001a8d33ae4d5c7091ac47b247a01e"}
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.807952 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8","Type":"ContainerStarted","Data":"ba9c2ad86431c25089da124e1b16306a2b89938ba7ad09e796ef7860773bca4d"}
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.824466 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.824450673 podStartE2EDuration="5.824450673s" podCreationTimestamp="2026-01-22 09:28:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:28:53.822711771 +0000 UTC m=+1103.666790834" watchObservedRunningTime="2026-01-22 09:28:53.824450673 +0000 UTC m=+1103.668529736"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.854914 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7c7f45b4bf-xx9p2" podStartSLOduration=3.72944473 podStartE2EDuration="6.854899952s" podCreationTimestamp="2026-01-22 09:28:47 +0000 UTC" firstStartedPulling="2026-01-22 09:28:49.199903923 +0000 UTC m=+1099.043982986" lastFinishedPulling="2026-01-22 09:28:52.325359145 +0000 UTC m=+1102.169438208" observedRunningTime="2026-01-22 09:28:53.850082495 +0000 UTC m=+1103.694161578" watchObservedRunningTime="2026-01-22 09:28:53.854899952 +0000 UTC m=+1103.698979015"
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.876787 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7db6cccc7c-d7w6r"]
Jan 22 09:28:53 crc kubenswrapper[4892]: I0122 09:28:53.889757 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7db6cccc7c-d7w6r"]
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.022916 4892 scope.go:117] "RemoveContainer" containerID="a72a66a408f5cdf755796c6059cfdc114c9be6f246a49fdad645f6179b68e468"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.135085 4892 scope.go:117] "RemoveContainer" containerID="bf44bcedaa1c0496a00b680d7b533f93ba802aeb75351a460d0a4d75a60777c9"
Jan 22 09:28:54 crc kubenswrapper[4892]: E0122 09:28:54.135698 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf44bcedaa1c0496a00b680d7b533f93ba802aeb75351a460d0a4d75a60777c9\": container with ID starting with bf44bcedaa1c0496a00b680d7b533f93ba802aeb75351a460d0a4d75a60777c9 not found: ID does not exist" containerID="bf44bcedaa1c0496a00b680d7b533f93ba802aeb75351a460d0a4d75a60777c9"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.135766 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf44bcedaa1c0496a00b680d7b533f93ba802aeb75351a460d0a4d75a60777c9"} err="failed to get container status \"bf44bcedaa1c0496a00b680d7b533f93ba802aeb75351a460d0a4d75a60777c9\": rpc error: code = NotFound desc = could not find container \"bf44bcedaa1c0496a00b680d7b533f93ba802aeb75351a460d0a4d75a60777c9\": container with ID starting with bf44bcedaa1c0496a00b680d7b533f93ba802aeb75351a460d0a4d75a60777c9 not found: ID does not exist"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.135817 4892 scope.go:117] "RemoveContainer" containerID="a72a66a408f5cdf755796c6059cfdc114c9be6f246a49fdad645f6179b68e468"
Jan 22 09:28:54 crc kubenswrapper[4892]: E0122 09:28:54.139577 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a72a66a408f5cdf755796c6059cfdc114c9be6f246a49fdad645f6179b68e468\": container with ID starting with a72a66a408f5cdf755796c6059cfdc114c9be6f246a49fdad645f6179b68e468 not found: ID does not exist" containerID="a72a66a408f5cdf755796c6059cfdc114c9be6f246a49fdad645f6179b68e468"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.139609 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a72a66a408f5cdf755796c6059cfdc114c9be6f246a49fdad645f6179b68e468"} err="failed to get container status \"a72a66a408f5cdf755796c6059cfdc114c9be6f246a49fdad645f6179b68e468\": rpc error: code = NotFound desc = could not find container \"a72a66a408f5cdf755796c6059cfdc114c9be6f246a49fdad645f6179b68e468\": container with ID starting with a72a66a408f5cdf755796c6059cfdc114c9be6f246a49fdad645f6179b68e468 not found: ID does not exist"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.139633 4892 scope.go:117] "RemoveContainer" containerID="bf44bcedaa1c0496a00b680d7b533f93ba802aeb75351a460d0a4d75a60777c9"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.145192 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf44bcedaa1c0496a00b680d7b533f93ba802aeb75351a460d0a4d75a60777c9"} err="failed to get container status \"bf44bcedaa1c0496a00b680d7b533f93ba802aeb75351a460d0a4d75a60777c9\": rpc error: code = NotFound desc = could not find container \"bf44bcedaa1c0496a00b680d7b533f93ba802aeb75351a460d0a4d75a60777c9\": container with ID starting with bf44bcedaa1c0496a00b680d7b533f93ba802aeb75351a460d0a4d75a60777c9 not found: ID does not exist"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.145244 4892 scope.go:117] "RemoveContainer" containerID="a72a66a408f5cdf755796c6059cfdc114c9be6f246a49fdad645f6179b68e468"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.145965 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a72a66a408f5cdf755796c6059cfdc114c9be6f246a49fdad645f6179b68e468"} err="failed to get container status \"a72a66a408f5cdf755796c6059cfdc114c9be6f246a49fdad645f6179b68e468\": rpc error: code = NotFound desc = could not find container \"a72a66a408f5cdf755796c6059cfdc114c9be6f246a49fdad645f6179b68e468\": container with ID starting with a72a66a408f5cdf755796c6059cfdc114c9be6f246a49fdad645f6179b68e468 not found: ID does not exist"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.242073 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.611416 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-59ddd484c6-7p5xf"]
Jan 22 09:28:54 crc kubenswrapper[4892]: E0122 09:28:54.611757 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b473db-e057-4d55-b3e6-171e8618722f" containerName="horizon"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.611770 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b473db-e057-4d55-b3e6-171e8618722f" containerName="horizon"
Jan 22 09:28:54 crc kubenswrapper[4892]: E0122 09:28:54.611794 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b473db-e057-4d55-b3e6-171e8618722f" containerName="horizon-log"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.611800 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b473db-e057-4d55-b3e6-171e8618722f" containerName="horizon-log"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.612199 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3b473db-e057-4d55-b3e6-171e8618722f" containerName="horizon-log"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.612224 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3b473db-e057-4d55-b3e6-171e8618722f" containerName="horizon"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.613124 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-59ddd484c6-7p5xf"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.626965 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.627190 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.685980 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-59ddd484c6-7p5xf"]
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.696209 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b812f439-988c-4120-8b36-e21df38c2b97-public-tls-certs\") pod \"barbican-api-59ddd484c6-7p5xf\" (UID: \"b812f439-988c-4120-8b36-e21df38c2b97\") " pod="openstack/barbican-api-59ddd484c6-7p5xf"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.696276 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b812f439-988c-4120-8b36-e21df38c2b97-config-data\") pod \"barbican-api-59ddd484c6-7p5xf\" (UID: \"b812f439-988c-4120-8b36-e21df38c2b97\") " pod="openstack/barbican-api-59ddd484c6-7p5xf"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.696334 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b812f439-988c-4120-8b36-e21df38c2b97-internal-tls-certs\") pod \"barbican-api-59ddd484c6-7p5xf\" (UID: \"b812f439-988c-4120-8b36-e21df38c2b97\") " pod="openstack/barbican-api-59ddd484c6-7p5xf"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.696359 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b812f439-988c-4120-8b36-e21df38c2b97-combined-ca-bundle\") pod \"barbican-api-59ddd484c6-7p5xf\" (UID: \"b812f439-988c-4120-8b36-e21df38c2b97\") " pod="openstack/barbican-api-59ddd484c6-7p5xf"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.696377 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fm4h\" (UniqueName: \"kubernetes.io/projected/b812f439-988c-4120-8b36-e21df38c2b97-kube-api-access-2fm4h\") pod \"barbican-api-59ddd484c6-7p5xf\" (UID: \"b812f439-988c-4120-8b36-e21df38c2b97\") " pod="openstack/barbican-api-59ddd484c6-7p5xf"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.696412 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b812f439-988c-4120-8b36-e21df38c2b97-logs\") pod \"barbican-api-59ddd484c6-7p5xf\" (UID: \"b812f439-988c-4120-8b36-e21df38c2b97\") " pod="openstack/barbican-api-59ddd484c6-7p5xf"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.696456 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b812f439-988c-4120-8b36-e21df38c2b97-config-data-custom\") pod \"barbican-api-59ddd484c6-7p5xf\" (UID: \"b812f439-988c-4120-8b36-e21df38c2b97\") " pod="openstack/barbican-api-59ddd484c6-7p5xf"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.807794 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b812f439-988c-4120-8b36-e21df38c2b97-internal-tls-certs\") pod \"barbican-api-59ddd484c6-7p5xf\" (UID: \"b812f439-988c-4120-8b36-e21df38c2b97\") " pod="openstack/barbican-api-59ddd484c6-7p5xf"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.807873 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b812f439-988c-4120-8b36-e21df38c2b97-combined-ca-bundle\") pod \"barbican-api-59ddd484c6-7p5xf\" (UID: \"b812f439-988c-4120-8b36-e21df38c2b97\") " pod="openstack/barbican-api-59ddd484c6-7p5xf"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.807901 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fm4h\" (UniqueName: \"kubernetes.io/projected/b812f439-988c-4120-8b36-e21df38c2b97-kube-api-access-2fm4h\") pod \"barbican-api-59ddd484c6-7p5xf\" (UID: \"b812f439-988c-4120-8b36-e21df38c2b97\") " pod="openstack/barbican-api-59ddd484c6-7p5xf"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.808021 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b812f439-988c-4120-8b36-e21df38c2b97-logs\") pod \"barbican-api-59ddd484c6-7p5xf\" (UID: \"b812f439-988c-4120-8b36-e21df38c2b97\") " pod="openstack/barbican-api-59ddd484c6-7p5xf"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.808146 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b812f439-988c-4120-8b36-e21df38c2b97-config-data-custom\") pod \"barbican-api-59ddd484c6-7p5xf\" (UID: \"b812f439-988c-4120-8b36-e21df38c2b97\") " pod="openstack/barbican-api-59ddd484c6-7p5xf"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.808240 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b812f439-988c-4120-8b36-e21df38c2b97-public-tls-certs\") pod \"barbican-api-59ddd484c6-7p5xf\" (UID: \"b812f439-988c-4120-8b36-e21df38c2b97\") " pod="openstack/barbican-api-59ddd484c6-7p5xf"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.808332 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b812f439-988c-4120-8b36-e21df38c2b97-config-data\") pod \"barbican-api-59ddd484c6-7p5xf\" (UID: \"b812f439-988c-4120-8b36-e21df38c2b97\") " pod="openstack/barbican-api-59ddd484c6-7p5xf"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.809812 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b812f439-988c-4120-8b36-e21df38c2b97-logs\") pod \"barbican-api-59ddd484c6-7p5xf\" (UID: \"b812f439-988c-4120-8b36-e21df38c2b97\") " pod="openstack/barbican-api-59ddd484c6-7p5xf"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.824765 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b812f439-988c-4120-8b36-e21df38c2b97-public-tls-certs\") pod \"barbican-api-59ddd484c6-7p5xf\" (UID: \"b812f439-988c-4120-8b36-e21df38c2b97\") " pod="openstack/barbican-api-59ddd484c6-7p5xf"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.826069 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b812f439-988c-4120-8b36-e21df38c2b97-config-data\") pod \"barbican-api-59ddd484c6-7p5xf\" (UID: \"b812f439-988c-4120-8b36-e21df38c2b97\") " pod="openstack/barbican-api-59ddd484c6-7p5xf"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.826279 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b812f439-988c-4120-8b36-e21df38c2b97-internal-tls-certs\") pod \"barbican-api-59ddd484c6-7p5xf\" (UID: \"b812f439-988c-4120-8b36-e21df38c2b97\") " pod="openstack/barbican-api-59ddd484c6-7p5xf"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.837919 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b812f439-988c-4120-8b36-e21df38c2b97-combined-ca-bundle\") pod \"barbican-api-59ddd484c6-7p5xf\" (UID: \"b812f439-988c-4120-8b36-e21df38c2b97\") " pod="openstack/barbican-api-59ddd484c6-7p5xf"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.838958 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b812f439-988c-4120-8b36-e21df38c2b97-config-data-custom\") pod \"barbican-api-59ddd484c6-7p5xf\" (UID: \"b812f439-988c-4120-8b36-e21df38c2b97\") " pod="openstack/barbican-api-59ddd484c6-7p5xf"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.857799 4892 generic.go:334] "Generic (PLEG): container finished" podID="79d3567b-bda5-4ea2-9e1d-24f617405a38" containerID="fbbaa2bb002ebd669e13c65d11c7317ab84a8a6458ad0953fdb656539cee30b6" exitCode=0
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.857827 4892 generic.go:334] "Generic (PLEG): container finished" podID="79d3567b-bda5-4ea2-9e1d-24f617405a38" containerID="5ca976d5f619a9875e225b06d8f339f67814d869403c739b6b06b7293b1786db" exitCode=143
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.857898 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"79d3567b-bda5-4ea2-9e1d-24f617405a38","Type":"ContainerDied","Data":"fbbaa2bb002ebd669e13c65d11c7317ab84a8a6458ad0953fdb656539cee30b6"}
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.857925 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"79d3567b-bda5-4ea2-9e1d-24f617405a38","Type":"ContainerDied","Data":"5ca976d5f619a9875e225b06d8f339f67814d869403c739b6b06b7293b1786db"}
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.862509 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fm4h\" (UniqueName: \"kubernetes.io/projected/b812f439-988c-4120-8b36-e21df38c2b97-kube-api-access-2fm4h\") pod \"barbican-api-59ddd484c6-7p5xf\" (UID: \"b812f439-988c-4120-8b36-e21df38c2b97\") " pod="openstack/barbican-api-59ddd484c6-7p5xf"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.881755 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64acec33-a0ea-4b7f-b7dd-ae704b047a95","Type":"ContainerStarted","Data":"b0bc76740f92ec6b530e02a616b989e6d54a84d037af5f704cb931e4d9230009"}
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.941878 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-59ddd484c6-7p5xf"
Jan 22 09:28:54 crc kubenswrapper[4892]: I0122 09:28:54.958198 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.162662557 podStartE2EDuration="6.958177551s" podCreationTimestamp="2026-01-22 09:28:48 +0000 UTC" firstStartedPulling="2026-01-22 09:28:49.183709004 +0000 UTC m=+1099.027788067" lastFinishedPulling="2026-01-22 09:28:50.979223998 +0000 UTC m=+1100.823303061" observedRunningTime="2026-01-22 09:28:54.92186634 +0000 UTC m=+1104.765945403" watchObservedRunningTime="2026-01-22 09:28:54.958177551 +0000 UTC m=+1104.802256614"
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.096925 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.230602 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79d3567b-bda5-4ea2-9e1d-24f617405a38-logs\") pod \"79d3567b-bda5-4ea2-9e1d-24f617405a38\" (UID: \"79d3567b-bda5-4ea2-9e1d-24f617405a38\") "
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.230662 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d3567b-bda5-4ea2-9e1d-24f617405a38-config-data\") pod \"79d3567b-bda5-4ea2-9e1d-24f617405a38\" (UID: \"79d3567b-bda5-4ea2-9e1d-24f617405a38\") "
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.230793 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d3567b-bda5-4ea2-9e1d-24f617405a38-combined-ca-bundle\") pod \"79d3567b-bda5-4ea2-9e1d-24f617405a38\" (UID: \"79d3567b-bda5-4ea2-9e1d-24f617405a38\") "
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.230842 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnfsp\" (UniqueName: \"kubernetes.io/projected/79d3567b-bda5-4ea2-9e1d-24f617405a38-kube-api-access-tnfsp\") pod \"79d3567b-bda5-4ea2-9e1d-24f617405a38\" (UID: \"79d3567b-bda5-4ea2-9e1d-24f617405a38\") "
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.230877 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79d3567b-bda5-4ea2-9e1d-24f617405a38-config-data-custom\") pod \"79d3567b-bda5-4ea2-9e1d-24f617405a38\" (UID: \"79d3567b-bda5-4ea2-9e1d-24f617405a38\") "
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.230918 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79d3567b-bda5-4ea2-9e1d-24f617405a38-scripts\") pod \"79d3567b-bda5-4ea2-9e1d-24f617405a38\" (UID: \"79d3567b-bda5-4ea2-9e1d-24f617405a38\") "
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.230967 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/79d3567b-bda5-4ea2-9e1d-24f617405a38-etc-machine-id\") pod \"79d3567b-bda5-4ea2-9e1d-24f617405a38\" (UID: \"79d3567b-bda5-4ea2-9e1d-24f617405a38\") "
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.231627 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79d3567b-bda5-4ea2-9e1d-24f617405a38-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "79d3567b-bda5-4ea2-9e1d-24f617405a38" (UID: "79d3567b-bda5-4ea2-9e1d-24f617405a38"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.231986 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79d3567b-bda5-4ea2-9e1d-24f617405a38-logs" (OuterVolumeSpecName: "logs") pod "79d3567b-bda5-4ea2-9e1d-24f617405a38" (UID: "79d3567b-bda5-4ea2-9e1d-24f617405a38"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.240077 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79d3567b-bda5-4ea2-9e1d-24f617405a38-kube-api-access-tnfsp" (OuterVolumeSpecName: "kube-api-access-tnfsp") pod "79d3567b-bda5-4ea2-9e1d-24f617405a38" (UID: "79d3567b-bda5-4ea2-9e1d-24f617405a38"). InnerVolumeSpecName "kube-api-access-tnfsp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.241012 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d3567b-bda5-4ea2-9e1d-24f617405a38-scripts" (OuterVolumeSpecName: "scripts") pod "79d3567b-bda5-4ea2-9e1d-24f617405a38" (UID: "79d3567b-bda5-4ea2-9e1d-24f617405a38"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.253938 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d3567b-bda5-4ea2-9e1d-24f617405a38-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "79d3567b-bda5-4ea2-9e1d-24f617405a38" (UID: "79d3567b-bda5-4ea2-9e1d-24f617405a38"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.322419 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d3567b-bda5-4ea2-9e1d-24f617405a38-config-data" (OuterVolumeSpecName: "config-data") pod "79d3567b-bda5-4ea2-9e1d-24f617405a38" (UID: "79d3567b-bda5-4ea2-9e1d-24f617405a38"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.328378 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d3567b-bda5-4ea2-9e1d-24f617405a38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79d3567b-bda5-4ea2-9e1d-24f617405a38" (UID: "79d3567b-bda5-4ea2-9e1d-24f617405a38"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.333102 4892 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79d3567b-bda5-4ea2-9e1d-24f617405a38-logs\") on node \"crc\" DevicePath \"\""
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.333138 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d3567b-bda5-4ea2-9e1d-24f617405a38-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.333150 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d3567b-bda5-4ea2-9e1d-24f617405a38-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.333163 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnfsp\" (UniqueName: \"kubernetes.io/projected/79d3567b-bda5-4ea2-9e1d-24f617405a38-kube-api-access-tnfsp\") on node \"crc\" DevicePath \"\""
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.333174 4892 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79d3567b-bda5-4ea2-9e1d-24f617405a38-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.333184 4892 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79d3567b-bda5-4ea2-9e1d-24f617405a38-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.333193 4892 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/79d3567b-bda5-4ea2-9e1d-24f617405a38-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.430052 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3b473db-e057-4d55-b3e6-171e8618722f" path="/var/lib/kubelet/pods/c3b473db-e057-4d55-b3e6-171e8618722f/volumes"
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.544699 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-59ddd484c6-7p5xf"]
Jan 22 09:28:55 crc kubenswrapper[4892]: W0122 09:28:55.554889 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb812f439_988c_4120_8b36_e21df38c2b97.slice/crio-0f26eaee8bb794381f8e3faa6a653b5dc3d7b69a3e94476b308422eb183defd4 WatchSource:0}: Error finding container 0f26eaee8bb794381f8e3faa6a653b5dc3d7b69a3e94476b308422eb183defd4: Status 404 returned error can't find the container with id 0f26eaee8bb794381f8e3faa6a653b5dc3d7b69a3e94476b308422eb183defd4
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.890709 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59ddd484c6-7p5xf" event={"ID":"b812f439-988c-4120-8b36-e21df38c2b97","Type":"ContainerStarted","Data":"0f26eaee8bb794381f8e3faa6a653b5dc3d7b69a3e94476b308422eb183defd4"}
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.893156 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64acec33-a0ea-4b7f-b7dd-ae704b047a95","Type":"ContainerStarted","Data":"779ed22187101d315e9196eabd0c32bf707f10f7a60f267f799195fd44b36e3c"}
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.895276 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"79d3567b-bda5-4ea2-9e1d-24f617405a38","Type":"ContainerDied","Data":"8ce11f6cc97188658c38fc3793ea2f2ab7dc20f90f14e13b4ecf74fb759b68f8"}
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.895327 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.895338 4892 scope.go:117] "RemoveContainer" containerID="fbbaa2bb002ebd669e13c65d11c7317ab84a8a6458ad0953fdb656539cee30b6"
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.930045 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.930588 4892 scope.go:117] "RemoveContainer" containerID="5ca976d5f619a9875e225b06d8f339f67814d869403c739b6b06b7293b1786db"
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.936968 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.949595 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 22 09:28:55 crc kubenswrapper[4892]: E0122 09:28:55.949907 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d3567b-bda5-4ea2-9e1d-24f617405a38" containerName="cinder-api-log"
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.949923 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d3567b-bda5-4ea2-9e1d-24f617405a38" containerName="cinder-api-log"
Jan 22 09:28:55 crc kubenswrapper[4892]: E0122 09:28:55.949949 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d3567b-bda5-4ea2-9e1d-24f617405a38" containerName="cinder-api"
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.949956 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d3567b-bda5-4ea2-9e1d-24f617405a38" containerName="cinder-api"
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.950146 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="79d3567b-bda5-4ea2-9e1d-24f617405a38" containerName="cinder-api-log"
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.950161 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="79d3567b-bda5-4ea2-9e1d-24f617405a38" containerName="cinder-api"
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.951053 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.955420 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.955484 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.962492 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Jan 22 09:28:55 crc kubenswrapper[4892]: I0122 09:28:55.962567 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.044388 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05319583-8c6d-43a9-88b6-1cba9781f85b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"05319583-8c6d-43a9-88b6-1cba9781f85b\") " pod="openstack/cinder-api-0"
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.044428 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05319583-8c6d-43a9-88b6-1cba9781f85b-logs\") pod \"cinder-api-0\" (UID: \"05319583-8c6d-43a9-88b6-1cba9781f85b\") " pod="openstack/cinder-api-0"
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.044482 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05319583-8c6d-43a9-88b6-1cba9781f85b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"05319583-8c6d-43a9-88b6-1cba9781f85b\") " pod="openstack/cinder-api-0"
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.044563 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05319583-8c6d-43a9-88b6-1cba9781f85b-config-data-custom\") pod \"cinder-api-0\" (UID: \"05319583-8c6d-43a9-88b6-1cba9781f85b\") " pod="openstack/cinder-api-0"
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.044623 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7n9d\" (UniqueName: \"kubernetes.io/projected/05319583-8c6d-43a9-88b6-1cba9781f85b-kube-api-access-m7n9d\") pod \"cinder-api-0\" (UID: \"05319583-8c6d-43a9-88b6-1cba9781f85b\") " pod="openstack/cinder-api-0"
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.044647 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05319583-8c6d-43a9-88b6-1cba9781f85b-config-data\") pod \"cinder-api-0\" (UID: \"05319583-8c6d-43a9-88b6-1cba9781f85b\") " pod="openstack/cinder-api-0"
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.044698 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05319583-8c6d-43a9-88b6-1cba9781f85b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"05319583-8c6d-43a9-88b6-1cba9781f85b\") " pod="openstack/cinder-api-0"
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.044719 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/05319583-8c6d-43a9-88b6-1cba9781f85b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"05319583-8c6d-43a9-88b6-1cba9781f85b\") " pod="openstack/cinder-api-0"
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.044755 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05319583-8c6d-43a9-88b6-1cba9781f85b-scripts\") pod \"cinder-api-0\" (UID: \"05319583-8c6d-43a9-88b6-1cba9781f85b\") " pod="openstack/cinder-api-0"
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.147364 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05319583-8c6d-43a9-88b6-1cba9781f85b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"05319583-8c6d-43a9-88b6-1cba9781f85b\") " pod="openstack/cinder-api-0"
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.147446 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05319583-8c6d-43a9-88b6-1cba9781f85b-config-data-custom\") pod \"cinder-api-0\" (UID: \"05319583-8c6d-43a9-88b6-1cba9781f85b\") " pod="openstack/cinder-api-0"
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.147482 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7n9d\" (UniqueName: \"kubernetes.io/projected/05319583-8c6d-43a9-88b6-1cba9781f85b-kube-api-access-m7n9d\") pod \"cinder-api-0\" (UID: \"05319583-8c6d-43a9-88b6-1cba9781f85b\") " pod="openstack/cinder-api-0"
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.147505 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05319583-8c6d-43a9-88b6-1cba9781f85b-config-data\") pod \"cinder-api-0\" (UID: \"05319583-8c6d-43a9-88b6-1cba9781f85b\") " pod="openstack/cinder-api-0"
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.147556 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05319583-8c6d-43a9-88b6-1cba9781f85b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"05319583-8c6d-43a9-88b6-1cba9781f85b\") " pod="openstack/cinder-api-0"
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.147579 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/05319583-8c6d-43a9-88b6-1cba9781f85b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"05319583-8c6d-43a9-88b6-1cba9781f85b\") " pod="openstack/cinder-api-0"
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.147613 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05319583-8c6d-43a9-88b6-1cba9781f85b-scripts\") pod \"cinder-api-0\" (UID: \"05319583-8c6d-43a9-88b6-1cba9781f85b\") " pod="openstack/cinder-api-0"
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.147674 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05319583-8c6d-43a9-88b6-1cba9781f85b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"05319583-8c6d-43a9-88b6-1cba9781f85b\") " pod="openstack/cinder-api-0"
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.147695 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05319583-8c6d-43a9-88b6-1cba9781f85b-logs\") pod \"cinder-api-0\" (UID: \"05319583-8c6d-43a9-88b6-1cba9781f85b\") " pod="openstack/cinder-api-0"
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.148143 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05319583-8c6d-43a9-88b6-1cba9781f85b-logs\") pod \"cinder-api-0\" (UID: \"05319583-8c6d-43a9-88b6-1cba9781f85b\") " pod="openstack/cinder-api-0"
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.148663 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/05319583-8c6d-43a9-88b6-1cba9781f85b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"05319583-8c6d-43a9-88b6-1cba9781f85b\") " pod="openstack/cinder-api-0"
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.153984 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05319583-8c6d-43a9-88b6-1cba9781f85b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"05319583-8c6d-43a9-88b6-1cba9781f85b\") " pod="openstack/cinder-api-0"
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.154328 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05319583-8c6d-43a9-88b6-1cba9781f85b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"05319583-8c6d-43a9-88b6-1cba9781f85b\") " pod="openstack/cinder-api-0"
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.155541 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05319583-8c6d-43a9-88b6-1cba9781f85b-config-data\") pod \"cinder-api-0\" (UID: \"05319583-8c6d-43a9-88b6-1cba9781f85b\") " pod="openstack/cinder-api-0"
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.155901 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05319583-8c6d-43a9-88b6-1cba9781f85b-scripts\") pod \"cinder-api-0\" (UID: \"05319583-8c6d-43a9-88b6-1cba9781f85b\") " pod="openstack/cinder-api-0"
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.157808 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05319583-8c6d-43a9-88b6-1cba9781f85b-config-data-custom\") pod \"cinder-api-0\" (UID: \"05319583-8c6d-43a9-88b6-1cba9781f85b\") " pod="openstack/cinder-api-0"
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.163334 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05319583-8c6d-43a9-88b6-1cba9781f85b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"05319583-8c6d-43a9-88b6-1cba9781f85b\") " pod="openstack/cinder-api-0"
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.164738 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7n9d\" (UniqueName: \"kubernetes.io/projected/05319583-8c6d-43a9-88b6-1cba9781f85b-kube-api-access-m7n9d\") pod \"cinder-api-0\" (UID: \"05319583-8c6d-43a9-88b6-1cba9781f85b\") " pod="openstack/cinder-api-0"
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.266027 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.725803 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.860600 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-79777d5484-zk25q"
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.906408 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"05319583-8c6d-43a9-88b6-1cba9781f85b","Type":"ContainerStarted","Data":"6b54ffc23ed75c6fdbb3ab5cb468cc3b942b2ac72daec78fae1186a88b9755b0"}
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.910688 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59ddd484c6-7p5xf" event={"ID":"b812f439-988c-4120-8b36-e21df38c2b97","Type":"ContainerStarted","Data":"7e4272d6ae28ba80b2e489bc195e803ebf344c34c094a6e09d894e461a6b0459"}
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.910723 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59ddd484c6-7p5xf" event={"ID":"b812f439-988c-4120-8b36-e21df38c2b97","Type":"ContainerStarted","Data":"c4afc45d1fde3edcae755209f44363c0bd942356f6a345a0841295f2a9d1df18"}
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.910852 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-59ddd484c6-7p5xf"
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.911146 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-59ddd484c6-7p5xf"
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.920695 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64acec33-a0ea-4b7f-b7dd-ae704b047a95","Type":"ContainerStarted","Data":"b3bec6e9894c7cd6ad1ce0be1603747f327c8c4e715957d82111fcafaa304e32"}
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.920752 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64acec33-a0ea-4b7f-b7dd-ae704b047a95","Type":"ContainerStarted","Data":"0f416c07896322b43443694c5c94a4a32ecdc0e4d223f3ec89e702468fd1fe8f"}
Jan 22 09:28:56 crc kubenswrapper[4892]: I0122 09:28:56.936063 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-59ddd484c6-7p5xf" podStartSLOduration=2.93603945 podStartE2EDuration="2.93603945s" podCreationTimestamp="2026-01-22 09:28:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:28:56.929098691 +0000 UTC m=+1106.773177754" watchObservedRunningTime="2026-01-22 09:28:56.93603945 +0000 UTC m=+1106.780118513"
Jan 22 09:28:57 crc kubenswrapper[4892]: I0122 09:28:57.081049 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7bd8749ddb-x9h4l"
Jan 22 09:28:57 crc kubenswrapper[4892]: I0122 09:28:57.428606 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79d3567b-bda5-4ea2-9e1d-24f617405a38" path="/var/lib/kubelet/pods/79d3567b-bda5-4ea2-9e1d-24f617405a38/volumes"
Jan 22 09:28:57 crc kubenswrapper[4892]: I0122 09:28:57.934162 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"05319583-8c6d-43a9-88b6-1cba9781f85b","Type":"ContainerStarted","Data":"38f187f77b7caec7b2d69ef7a2f92f9e7ba00c075c81f03ea14d760928c37037"}
Jan 22 09:28:58 crc kubenswrapper[4892]: I0122 09:28:58.439703 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 22 09:28:58 crc kubenswrapper[4892]: I0122 09:28:58.609739 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-79777d5484-zk25q"
Jan 22 09:28:58 crc kubenswrapper[4892]: I0122 09:28:58.716891 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 22 09:28:58 crc kubenswrapper[4892]: I0122 09:28:58.821912 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75bfc9b94f-fqr94"
Jan 22 09:28:58 crc kubenswrapper[4892]: I0122 09:28:58.874175 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-vjc2j"]
Jan 22 09:28:58 crc kubenswrapper[4892]: I0122 09:28:58.874410 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5dc4fcdbc-vjc2j" podUID="ef0903d9-36f7-40fd-a9ef-5688e7030688" containerName="dnsmasq-dns" containerID="cri-o://a42a4ba9a9178a83efcedc9dee4ec62cd509815212de8afd7cf796f95d4d348c" gracePeriod=10
Jan 22 09:28:58 crc kubenswrapper[4892]: I0122 09:28:58.944752 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"05319583-8c6d-43a9-88b6-1cba9781f85b","Type":"ContainerStarted","Data":"b1e5fe94c9461ed6320033a1f7f9bb7edd9f39b06ff9c7f70d9e066c1a6cf5c8"}
Jan 22 09:28:58 crc kubenswrapper[4892]: I0122 09:28:58.945394 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 22 09:28:58 crc kubenswrapper[4892]: I0122 09:28:58.955817 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64acec33-a0ea-4b7f-b7dd-ae704b047a95","Type":"ContainerStarted","Data":"fee122524d788c74233f0530d6672b2c001393b745996bd9552612a9de378a92"}
Jan 22 09:28:58 crc kubenswrapper[4892]: I0122 09:28:58.955942 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 22 09:28:59 crc kubenswrapper[4892]: I0122 09:28:59.027122 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.027102845 podStartE2EDuration="4.027102845s" podCreationTimestamp="2026-01-22 09:28:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:28:58.973654008 +0000 UTC m=+1108.817733061" watchObservedRunningTime="2026-01-22 09:28:59.027102845 +0000 UTC m=+1108.871181908"
Jan 22 09:28:59 crc kubenswrapper[4892]: I0122 09:28:59.056304 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 22 09:28:59 crc kubenswrapper[4892]: I0122 09:28:59.142345 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7bd8749ddb-x9h4l"
Jan 22 09:28:59 crc kubenswrapper[4892]: I0122 09:28:59.164583 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.694616146 podStartE2EDuration="6.164562632s" podCreationTimestamp="2026-01-22 09:28:53 +0000 UTC" firstStartedPulling="2026-01-22 09:28:54.25321463 +0000 UTC m=+1104.097293693" lastFinishedPulling="2026-01-22 09:28:57.723161116 +0000 UTC m=+1107.567240179" observedRunningTime="2026-01-22 09:28:59.069703129 +0000 UTC m=+1108.913782192" watchObservedRunningTime="2026-01-22 09:28:59.164562632 +0000 UTC m=+1109.008641705"
Jan 22 09:28:59 crc kubenswrapper[4892]: I0122 09:28:59.203907 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-79777d5484-zk25q"]
Jan 22 09:28:59 crc kubenswrapper[4892]: I0122 09:28:59.204107 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-79777d5484-zk25q" podUID="c837fca4-ae2c-43fd-850c-f2aca8331d27" containerName="horizon-log" containerID="cri-o://29a40a09756cba7b3751e000919e2a9027e341f2b2d60abf1bab59b34c64bafb" gracePeriod=30
Jan 22 09:28:59 crc kubenswrapper[4892]: I0122 09:28:59.204560 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-79777d5484-zk25q" podUID="c837fca4-ae2c-43fd-850c-f2aca8331d27" containerName="horizon" containerID="cri-o://a517a1366ff75b3114aba08a5cd170cff1a9a46111a0373888655bd0a308fa5e" gracePeriod=30
Jan 22 09:28:59 crc kubenswrapper[4892]: I0122 09:28:59.934978 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-vjc2j"
Jan 22 09:28:59 crc kubenswrapper[4892]: I0122 09:28:59.986314 4892 generic.go:334] "Generic (PLEG): container finished" podID="ef0903d9-36f7-40fd-a9ef-5688e7030688" containerID="a42a4ba9a9178a83efcedc9dee4ec62cd509815212de8afd7cf796f95d4d348c" exitCode=0
Jan 22 09:28:59 crc kubenswrapper[4892]: I0122 09:28:59.986561 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="9d53dda1-677a-48d6-9416-bc4d4f2c3cf8" containerName="cinder-scheduler" containerID="cri-o://ba9c2ad86431c25089da124e1b16306a2b89938ba7ad09e796ef7860773bca4d" gracePeriod=30
Jan 22 09:28:59 crc kubenswrapper[4892]: I0122 09:28:59.986678 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-vjc2j"
Jan 22 09:28:59 crc kubenswrapper[4892]: I0122 09:28:59.987059 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="9d53dda1-677a-48d6-9416-bc4d4f2c3cf8" containerName="probe" containerID="cri-o://5850edd1813fc7859f01aca76116bec2c8001a8d33ae4d5c7091ac47b247a01e" gracePeriod=30
Jan 22 09:28:59 crc kubenswrapper[4892]: I0122 09:28:59.987091 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-vjc2j" event={"ID":"ef0903d9-36f7-40fd-a9ef-5688e7030688","Type":"ContainerDied","Data":"a42a4ba9a9178a83efcedc9dee4ec62cd509815212de8afd7cf796f95d4d348c"}
Jan 22 09:28:59 crc kubenswrapper[4892]: I0122 09:28:59.987161 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-vjc2j" event={"ID":"ef0903d9-36f7-40fd-a9ef-5688e7030688","Type":"ContainerDied","Data":"eb2e504baa4f2ae79e9fa37201edf21b87a64b47f6d03eeb7e9d10a6185d0902"}
Jan 22 09:28:59 crc kubenswrapper[4892]: I0122 09:28:59.987182 4892 scope.go:117] "RemoveContainer" containerID="a42a4ba9a9178a83efcedc9dee4ec62cd509815212de8afd7cf796f95d4d348c"
Jan 22 09:29:00 crc kubenswrapper[4892]: I0122 09:29:00.020148 4892 scope.go:117] "RemoveContainer" containerID="93473b14492812102132acd7b64b2c9421677b996c4391cd59926196136b65de"
Jan 22 09:29:00 crc kubenswrapper[4892]: I0122 09:29:00.034076 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef0903d9-36f7-40fd-a9ef-5688e7030688-ovsdbserver-sb\") pod \"ef0903d9-36f7-40fd-a9ef-5688e7030688\" (UID: \"ef0903d9-36f7-40fd-a9ef-5688e7030688\") "
Jan 22 09:29:00 crc kubenswrapper[4892]: I0122 09:29:00.034165 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef0903d9-36f7-40fd-a9ef-5688e7030688-config\") pod \"ef0903d9-36f7-40fd-a9ef-5688e7030688\" (UID: \"ef0903d9-36f7-40fd-a9ef-5688e7030688\") "
Jan 22 09:29:00 crc kubenswrapper[4892]: I0122 09:29:00.034274 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6tkf\" (UniqueName: \"kubernetes.io/projected/ef0903d9-36f7-40fd-a9ef-5688e7030688-kube-api-access-s6tkf\") pod \"ef0903d9-36f7-40fd-a9ef-5688e7030688\" (UID: \"ef0903d9-36f7-40fd-a9ef-5688e7030688\") "
Jan 22 09:29:00 crc kubenswrapper[4892]: I0122 09:29:00.034362 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef0903d9-36f7-40fd-a9ef-5688e7030688-ovsdbserver-nb\") pod \"ef0903d9-36f7-40fd-a9ef-5688e7030688\" (UID: \"ef0903d9-36f7-40fd-a9ef-5688e7030688\") "
Jan 22 09:29:00 crc kubenswrapper[4892]: I0122 09:29:00.034389 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef0903d9-36f7-40fd-a9ef-5688e7030688-dns-svc\") pod \"ef0903d9-36f7-40fd-a9ef-5688e7030688\" (UID: \"ef0903d9-36f7-40fd-a9ef-5688e7030688\") "
Jan 22 09:29:00 crc kubenswrapper[4892]: I0122 09:29:00.034444 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef0903d9-36f7-40fd-a9ef-5688e7030688-dns-swift-storage-0\") pod \"ef0903d9-36f7-40fd-a9ef-5688e7030688\" (UID: \"ef0903d9-36f7-40fd-a9ef-5688e7030688\") "
Jan 22 09:29:00 crc kubenswrapper[4892]:
I0122 09:29:00.047562 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef0903d9-36f7-40fd-a9ef-5688e7030688-kube-api-access-s6tkf" (OuterVolumeSpecName: "kube-api-access-s6tkf") pod "ef0903d9-36f7-40fd-a9ef-5688e7030688" (UID: "ef0903d9-36f7-40fd-a9ef-5688e7030688"). InnerVolumeSpecName "kube-api-access-s6tkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:29:00 crc kubenswrapper[4892]: I0122 09:29:00.097185 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef0903d9-36f7-40fd-a9ef-5688e7030688-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ef0903d9-36f7-40fd-a9ef-5688e7030688" (UID: "ef0903d9-36f7-40fd-a9ef-5688e7030688"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:29:00 crc kubenswrapper[4892]: I0122 09:29:00.103616 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef0903d9-36f7-40fd-a9ef-5688e7030688-config" (OuterVolumeSpecName: "config") pod "ef0903d9-36f7-40fd-a9ef-5688e7030688" (UID: "ef0903d9-36f7-40fd-a9ef-5688e7030688"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:29:00 crc kubenswrapper[4892]: I0122 09:29:00.104758 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef0903d9-36f7-40fd-a9ef-5688e7030688-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ef0903d9-36f7-40fd-a9ef-5688e7030688" (UID: "ef0903d9-36f7-40fd-a9ef-5688e7030688"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:29:00 crc kubenswrapper[4892]: I0122 09:29:00.109374 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef0903d9-36f7-40fd-a9ef-5688e7030688-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ef0903d9-36f7-40fd-a9ef-5688e7030688" (UID: "ef0903d9-36f7-40fd-a9ef-5688e7030688"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:29:00 crc kubenswrapper[4892]: I0122 09:29:00.135686 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef0903d9-36f7-40fd-a9ef-5688e7030688-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ef0903d9-36f7-40fd-a9ef-5688e7030688" (UID: "ef0903d9-36f7-40fd-a9ef-5688e7030688"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:29:00 crc kubenswrapper[4892]: I0122 09:29:00.136529 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef0903d9-36f7-40fd-a9ef-5688e7030688-ovsdbserver-sb\") pod \"ef0903d9-36f7-40fd-a9ef-5688e7030688\" (UID: \"ef0903d9-36f7-40fd-a9ef-5688e7030688\") " Jan 22 09:29:00 crc kubenswrapper[4892]: W0122 09:29:00.136827 4892 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/ef0903d9-36f7-40fd-a9ef-5688e7030688/volumes/kubernetes.io~configmap/ovsdbserver-sb Jan 22 09:29:00 crc kubenswrapper[4892]: I0122 09:29:00.136897 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef0903d9-36f7-40fd-a9ef-5688e7030688-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ef0903d9-36f7-40fd-a9ef-5688e7030688" (UID: "ef0903d9-36f7-40fd-a9ef-5688e7030688"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:29:00 crc kubenswrapper[4892]: I0122 09:29:00.137338 4892 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef0903d9-36f7-40fd-a9ef-5688e7030688-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:00 crc kubenswrapper[4892]: I0122 09:29:00.137411 4892 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef0903d9-36f7-40fd-a9ef-5688e7030688-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:00 crc kubenswrapper[4892]: I0122 09:29:00.137541 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef0903d9-36f7-40fd-a9ef-5688e7030688-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:00 crc kubenswrapper[4892]: I0122 09:29:00.137602 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6tkf\" (UniqueName: \"kubernetes.io/projected/ef0903d9-36f7-40fd-a9ef-5688e7030688-kube-api-access-s6tkf\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:00 crc kubenswrapper[4892]: I0122 09:29:00.137660 4892 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef0903d9-36f7-40fd-a9ef-5688e7030688-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:00 crc kubenswrapper[4892]: I0122 09:29:00.137711 4892 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef0903d9-36f7-40fd-a9ef-5688e7030688-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:00 crc kubenswrapper[4892]: I0122 09:29:00.213423 4892 scope.go:117] "RemoveContainer" containerID="a42a4ba9a9178a83efcedc9dee4ec62cd509815212de8afd7cf796f95d4d348c" Jan 22 09:29:00 crc kubenswrapper[4892]: E0122 09:29:00.213903 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a42a4ba9a9178a83efcedc9dee4ec62cd509815212de8afd7cf796f95d4d348c\": container with ID starting with a42a4ba9a9178a83efcedc9dee4ec62cd509815212de8afd7cf796f95d4d348c not found: ID does not exist" containerID="a42a4ba9a9178a83efcedc9dee4ec62cd509815212de8afd7cf796f95d4d348c" Jan 22 09:29:00 crc kubenswrapper[4892]: I0122 09:29:00.213942 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a42a4ba9a9178a83efcedc9dee4ec62cd509815212de8afd7cf796f95d4d348c"} err="failed to get container status \"a42a4ba9a9178a83efcedc9dee4ec62cd509815212de8afd7cf796f95d4d348c\": rpc error: code = NotFound desc = could not find container \"a42a4ba9a9178a83efcedc9dee4ec62cd509815212de8afd7cf796f95d4d348c\": container with ID starting with a42a4ba9a9178a83efcedc9dee4ec62cd509815212de8afd7cf796f95d4d348c not found: ID does not exist" Jan 22 09:29:00 crc kubenswrapper[4892]: I0122 09:29:00.213974 4892 scope.go:117] "RemoveContainer" containerID="93473b14492812102132acd7b64b2c9421677b996c4391cd59926196136b65de" Jan 22 09:29:00 crc kubenswrapper[4892]: E0122 09:29:00.214485 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93473b14492812102132acd7b64b2c9421677b996c4391cd59926196136b65de\": container with ID starting with 93473b14492812102132acd7b64b2c9421677b996c4391cd59926196136b65de not found: ID does not exist" 
containerID="93473b14492812102132acd7b64b2c9421677b996c4391cd59926196136b65de" Jan 22 09:29:00 crc kubenswrapper[4892]: I0122 09:29:00.214514 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93473b14492812102132acd7b64b2c9421677b996c4391cd59926196136b65de"} err="failed to get container status \"93473b14492812102132acd7b64b2c9421677b996c4391cd59926196136b65de\": rpc error: code = NotFound desc = could not find container \"93473b14492812102132acd7b64b2c9421677b996c4391cd59926196136b65de\": container with ID starting with 93473b14492812102132acd7b64b2c9421677b996c4391cd59926196136b65de not found: ID does not exist" Jan 22 09:29:00 crc kubenswrapper[4892]: I0122 09:29:00.319088 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-vjc2j"] Jan 22 09:29:00 crc kubenswrapper[4892]: I0122 09:29:00.329034 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-vjc2j"] Jan 22 09:29:00 crc kubenswrapper[4892]: I0122 09:29:00.511948 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7fbbbbb748-pbjzb" Jan 22 09:29:00 crc kubenswrapper[4892]: I0122 09:29:00.667680 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7fbbbbb748-pbjzb" Jan 22 09:29:01 crc kubenswrapper[4892]: I0122 09:29:01.001481 4892 generic.go:334] "Generic (PLEG): container finished" podID="9d53dda1-677a-48d6-9416-bc4d4f2c3cf8" containerID="5850edd1813fc7859f01aca76116bec2c8001a8d33ae4d5c7091ac47b247a01e" exitCode=0 Jan 22 09:29:01 crc kubenswrapper[4892]: I0122 09:29:01.001515 4892 generic.go:334] "Generic (PLEG): container finished" podID="9d53dda1-677a-48d6-9416-bc4d4f2c3cf8" containerID="ba9c2ad86431c25089da124e1b16306a2b89938ba7ad09e796ef7860773bca4d" exitCode=0 Jan 22 09:29:01 crc kubenswrapper[4892]: I0122 09:29:01.001791 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8","Type":"ContainerDied","Data":"5850edd1813fc7859f01aca76116bec2c8001a8d33ae4d5c7091ac47b247a01e"} Jan 22 09:29:01 crc kubenswrapper[4892]: I0122 09:29:01.001839 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8","Type":"ContainerDied","Data":"ba9c2ad86431c25089da124e1b16306a2b89938ba7ad09e796ef7860773bca4d"} Jan 22 09:29:01 crc kubenswrapper[4892]: I0122 09:29:01.428914 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef0903d9-36f7-40fd-a9ef-5688e7030688" path="/var/lib/kubelet/pods/ef0903d9-36f7-40fd-a9ef-5688e7030688/volumes" Jan 22 09:29:01 crc kubenswrapper[4892]: I0122 09:29:01.496217 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 09:29:01 crc kubenswrapper[4892]: I0122 09:29:01.511568 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-scripts\") pod \"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8\" (UID: \"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8\") " Jan 22 09:29:01 crc kubenswrapper[4892]: I0122 09:29:01.511663 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjxtr\" (UniqueName: \"kubernetes.io/projected/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-kube-api-access-pjxtr\") pod \"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8\" (UID: \"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8\") " Jan 22 09:29:01 crc kubenswrapper[4892]: I0122 09:29:01.511711 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-combined-ca-bundle\") pod \"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8\" (UID: \"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8\") " Jan 22 09:29:01 crc kubenswrapper[4892]: I0122 09:29:01.511748 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-config-data-custom\") pod \"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8\" (UID: \"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8\") " Jan 22 09:29:01 crc kubenswrapper[4892]: I0122 09:29:01.511778 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-config-data\") pod \"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8\" (UID: \"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8\") " Jan 22 09:29:01 crc kubenswrapper[4892]: I0122 09:29:01.511830 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-etc-machine-id\") pod \"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8\" (UID: \"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8\") " Jan 22 09:29:01 crc kubenswrapper[4892]: I0122 09:29:01.513592 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9d53dda1-677a-48d6-9416-bc4d4f2c3cf8" (UID: "9d53dda1-677a-48d6-9416-bc4d4f2c3cf8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:29:01 crc kubenswrapper[4892]: I0122 09:29:01.517236 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-scripts" (OuterVolumeSpecName: "scripts") pod "9d53dda1-677a-48d6-9416-bc4d4f2c3cf8" (UID: "9d53dda1-677a-48d6-9416-bc4d4f2c3cf8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:01 crc kubenswrapper[4892]: I0122 09:29:01.519463 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-kube-api-access-pjxtr" (OuterVolumeSpecName: "kube-api-access-pjxtr") pod "9d53dda1-677a-48d6-9416-bc4d4f2c3cf8" (UID: "9d53dda1-677a-48d6-9416-bc4d4f2c3cf8"). InnerVolumeSpecName "kube-api-access-pjxtr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:29:01 crc kubenswrapper[4892]: I0122 09:29:01.538418 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9d53dda1-677a-48d6-9416-bc4d4f2c3cf8" (UID: "9d53dda1-677a-48d6-9416-bc4d4f2c3cf8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:01 crc kubenswrapper[4892]: I0122 09:29:01.598671 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d53dda1-677a-48d6-9416-bc4d4f2c3cf8" (UID: "9d53dda1-677a-48d6-9416-bc4d4f2c3cf8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:01 crc kubenswrapper[4892]: I0122 09:29:01.615156 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjxtr\" (UniqueName: \"kubernetes.io/projected/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-kube-api-access-pjxtr\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:01 crc kubenswrapper[4892]: I0122 09:29:01.615192 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:01 crc kubenswrapper[4892]: I0122 09:29:01.615201 4892 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:01 crc kubenswrapper[4892]: I0122 09:29:01.615210 4892 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:01 crc kubenswrapper[4892]: I0122 09:29:01.615219 4892 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:01 crc kubenswrapper[4892]: I0122 09:29:01.646370 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-config-data" (OuterVolumeSpecName: "config-data") pod "9d53dda1-677a-48d6-9416-bc4d4f2c3cf8" (UID: "9d53dda1-677a-48d6-9416-bc4d4f2c3cf8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:01 crc kubenswrapper[4892]: I0122 09:29:01.716422 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.013177 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9d53dda1-677a-48d6-9416-bc4d4f2c3cf8","Type":"ContainerDied","Data":"7b2bb37aa017be70c973f2aa84afab2e79b1f1c91ee33d188644afa2b27ade66"} Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.013228 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.013554 4892 scope.go:117] "RemoveContainer" containerID="5850edd1813fc7859f01aca76116bec2c8001a8d33ae4d5c7091ac47b247a01e" Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.058555 4892 scope.go:117] "RemoveContainer" containerID="ba9c2ad86431c25089da124e1b16306a2b89938ba7ad09e796ef7860773bca4d" Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.119709 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.132552 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.143664 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 09:29:02 crc kubenswrapper[4892]: E0122 09:29:02.144134 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef0903d9-36f7-40fd-a9ef-5688e7030688" containerName="dnsmasq-dns" Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.144152 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef0903d9-36f7-40fd-a9ef-5688e7030688" containerName="dnsmasq-dns" Jan 22 09:29:02 crc kubenswrapper[4892]: E0122 09:29:02.144175 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d53dda1-677a-48d6-9416-bc4d4f2c3cf8" containerName="cinder-scheduler" Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.144181 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d53dda1-677a-48d6-9416-bc4d4f2c3cf8" containerName="cinder-scheduler" Jan 22 09:29:02 crc kubenswrapper[4892]: E0122 09:29:02.144200 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef0903d9-36f7-40fd-a9ef-5688e7030688" containerName="init" Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.144208 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef0903d9-36f7-40fd-a9ef-5688e7030688" containerName="init" Jan 22 09:29:02 crc kubenswrapper[4892]: E0122 09:29:02.144223 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d53dda1-677a-48d6-9416-bc4d4f2c3cf8" containerName="probe" Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.144230 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d53dda1-677a-48d6-9416-bc4d4f2c3cf8" containerName="probe" Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.144440 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef0903d9-36f7-40fd-a9ef-5688e7030688" containerName="dnsmasq-dns" Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.144456 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d53dda1-677a-48d6-9416-bc4d4f2c3cf8" containerName="cinder-scheduler" Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.144467 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d53dda1-677a-48d6-9416-bc4d4f2c3cf8" containerName="probe" Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.147416 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.149358 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.164402 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.222078 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/808833e1-7e58-4b7e-a1bb-ff5cc72b5b35-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"808833e1-7e58-4b7e-a1bb-ff5cc72b5b35\") " pod="openstack/cinder-scheduler-0" Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.222178 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/808833e1-7e58-4b7e-a1bb-ff5cc72b5b35-config-data\") pod \"cinder-scheduler-0\" (UID: \"808833e1-7e58-4b7e-a1bb-ff5cc72b5b35\") " pod="openstack/cinder-scheduler-0" Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.222210 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/808833e1-7e58-4b7e-a1bb-ff5cc72b5b35-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"808833e1-7e58-4b7e-a1bb-ff5cc72b5b35\") " pod="openstack/cinder-scheduler-0" Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.222255 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdhpl\" (UniqueName: \"kubernetes.io/projected/808833e1-7e58-4b7e-a1bb-ff5cc72b5b35-kube-api-access-xdhpl\") pod \"cinder-scheduler-0\" (UID: \"808833e1-7e58-4b7e-a1bb-ff5cc72b5b35\") " pod="openstack/cinder-scheduler-0" Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.222389 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/808833e1-7e58-4b7e-a1bb-ff5cc72b5b35-scripts\") pod \"cinder-scheduler-0\" (UID: \"808833e1-7e58-4b7e-a1bb-ff5cc72b5b35\") " pod="openstack/cinder-scheduler-0" Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.222436 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/808833e1-7e58-4b7e-a1bb-ff5cc72b5b35-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"808833e1-7e58-4b7e-a1bb-ff5cc72b5b35\") " pod="openstack/cinder-scheduler-0" Jan 22 09:29:02 crc kubenswrapper[4892]: E0122 09:29:02.267247 4892 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d53dda1_677a_48d6_9416_bc4d4f2c3cf8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d53dda1_677a_48d6_9416_bc4d4f2c3cf8.slice/crio-7b2bb37aa017be70c973f2aa84afab2e79b1f1c91ee33d188644afa2b27ade66\": RecentStats: unable to find data in memory cache]" Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.323799 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/808833e1-7e58-4b7e-a1bb-ff5cc72b5b35-etc-machine-id\") pod 
\"cinder-scheduler-0\" (UID: \"808833e1-7e58-4b7e-a1bb-ff5cc72b5b35\") " pod="openstack/cinder-scheduler-0" Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.323880 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/808833e1-7e58-4b7e-a1bb-ff5cc72b5b35-config-data\") pod \"cinder-scheduler-0\" (UID: \"808833e1-7e58-4b7e-a1bb-ff5cc72b5b35\") " pod="openstack/cinder-scheduler-0" Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.323912 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/808833e1-7e58-4b7e-a1bb-ff5cc72b5b35-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"808833e1-7e58-4b7e-a1bb-ff5cc72b5b35\") " pod="openstack/cinder-scheduler-0" Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.323939 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/808833e1-7e58-4b7e-a1bb-ff5cc72b5b35-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"808833e1-7e58-4b7e-a1bb-ff5cc72b5b35\") " pod="openstack/cinder-scheduler-0" Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.323958 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdhpl\" (UniqueName: \"kubernetes.io/projected/808833e1-7e58-4b7e-a1bb-ff5cc72b5b35-kube-api-access-xdhpl\") pod \"cinder-scheduler-0\" (UID: \"808833e1-7e58-4b7e-a1bb-ff5cc72b5b35\") " pod="openstack/cinder-scheduler-0" Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.324061 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/808833e1-7e58-4b7e-a1bb-ff5cc72b5b35-scripts\") pod \"cinder-scheduler-0\" (UID: \"808833e1-7e58-4b7e-a1bb-ff5cc72b5b35\") " pod="openstack/cinder-scheduler-0" Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.324087 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/808833e1-7e58-4b7e-a1bb-ff5cc72b5b35-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"808833e1-7e58-4b7e-a1bb-ff5cc72b5b35\") " pod="openstack/cinder-scheduler-0" Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.328628 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/808833e1-7e58-4b7e-a1bb-ff5cc72b5b35-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"808833e1-7e58-4b7e-a1bb-ff5cc72b5b35\") " pod="openstack/cinder-scheduler-0" Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.329631 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/808833e1-7e58-4b7e-a1bb-ff5cc72b5b35-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"808833e1-7e58-4b7e-a1bb-ff5cc72b5b35\") " pod="openstack/cinder-scheduler-0" Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.330185 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/808833e1-7e58-4b7e-a1bb-ff5cc72b5b35-config-data\") pod \"cinder-scheduler-0\" (UID: \"808833e1-7e58-4b7e-a1bb-ff5cc72b5b35\") " pod="openstack/cinder-scheduler-0" Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.336710 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/808833e1-7e58-4b7e-a1bb-ff5cc72b5b35-scripts\") pod \"cinder-scheduler-0\" (UID: \"808833e1-7e58-4b7e-a1bb-ff5cc72b5b35\") " pod="openstack/cinder-scheduler-0" Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.339213 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdhpl\" (UniqueName: \"kubernetes.io/projected/808833e1-7e58-4b7e-a1bb-ff5cc72b5b35-kube-api-access-xdhpl\") pod \"cinder-scheduler-0\" (UID: \"808833e1-7e58-4b7e-a1bb-ff5cc72b5b35\") " pod="openstack/cinder-scheduler-0" Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.465977 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 09:29:02 crc kubenswrapper[4892]: W0122 09:29:02.905802 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod808833e1_7e58_4b7e_a1bb_ff5cc72b5b35.slice/crio-04c76e1820d2bcf5de22c63d2b61257e0c25048801fca01ae8e2ec4110cdd721 WatchSource:0}: Error finding container 04c76e1820d2bcf5de22c63d2b61257e0c25048801fca01ae8e2ec4110cdd721: Status 404 returned error can't find the container with id 04c76e1820d2bcf5de22c63d2b61257e0c25048801fca01ae8e2ec4110cdd721 Jan 22 09:29:02 crc kubenswrapper[4892]: I0122 09:29:02.911315 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 09:29:03 crc kubenswrapper[4892]: I0122 09:29:03.042137 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"808833e1-7e58-4b7e-a1bb-ff5cc72b5b35","Type":"ContainerStarted","Data":"04c76e1820d2bcf5de22c63d2b61257e0c25048801fca01ae8e2ec4110cdd721"} Jan 22 09:29:03 crc kubenswrapper[4892]: I0122 09:29:03.045565 4892 generic.go:334] "Generic (PLEG): container finished" podID="c837fca4-ae2c-43fd-850c-f2aca8331d27" containerID="a517a1366ff75b3114aba08a5cd170cff1a9a46111a0373888655bd0a308fa5e" exitCode=0 Jan 22 09:29:03 crc kubenswrapper[4892]: I0122 09:29:03.045692 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79777d5484-zk25q" event={"ID":"c837fca4-ae2c-43fd-850c-f2aca8331d27","Type":"ContainerDied","Data":"a517a1366ff75b3114aba08a5cd170cff1a9a46111a0373888655bd0a308fa5e"} Jan 22 09:29:03 crc kubenswrapper[4892]: I0122 09:29:03.429614 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d53dda1-677a-48d6-9416-bc4d4f2c3cf8" path="/var/lib/kubelet/pods/9d53dda1-677a-48d6-9416-bc4d4f2c3cf8/volumes" Jan 22 09:29:04 crc kubenswrapper[4892]: I0122 09:29:04.067914 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"808833e1-7e58-4b7e-a1bb-ff5cc72b5b35","Type":"ContainerStarted","Data":"0ee3751e8767c6a8621aab93b98db634479fbc269c700bb7fcb9b8c4cdfd1efa"} Jan 22 09:29:04 crc kubenswrapper[4892]: I0122 09:29:04.185975 4892 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-79777d5484-zk25q" podUID="c837fca4-ae2c-43fd-850c-f2aca8331d27" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 22 09:29:05 crc kubenswrapper[4892]: I0122 09:29:05.080016 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"808833e1-7e58-4b7e-a1bb-ff5cc72b5b35","Type":"ContainerStarted","Data":"a9cf042c016472cc4e6032348c966ecaa7a8c5c04824fc7c70156a0f9d3a42d6"} Jan 22 09:29:05 crc kubenswrapper[4892]: I0122 09:29:05.115720 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.115702634 podStartE2EDuration="3.115702634s" podCreationTimestamp="2026-01-22 09:29:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:29:05.110634541 +0000 UTC m=+1114.954713604" watchObservedRunningTime="2026-01-22 09:29:05.115702634 +0000 UTC m=+1114.959781697" Jan 22 09:29:06 crc kubenswrapper[4892]: I0122 09:29:06.428983 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-59ddd484c6-7p5xf" Jan 22 09:29:06 crc kubenswrapper[4892]: I0122 09:29:06.590389 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-8cff6669d-x8cnv" Jan 22 09:29:06 crc kubenswrapper[4892]: I0122 09:29:06.592160 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-8cff6669d-x8cnv" Jan 22 09:29:06 crc kubenswrapper[4892]: I0122 09:29:06.644253 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-59ddd484c6-7p5xf" Jan 22 09:29:06 crc kubenswrapper[4892]: I0122 09:29:06.709394 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7fbbbbb748-pbjzb"] Jan 22 09:29:06 crc kubenswrapper[4892]: I0122 09:29:06.709605 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7fbbbbb748-pbjzb" podUID="5766f7c7-fa97-4fe0-af76-e06667a8079b" containerName="barbican-api-log" containerID="cri-o://38190e579c44ae7b68c5d69d0962ee46006456495ac5d09a2b0fd1e7ff4ed696" gracePeriod=30 Jan 22 09:29:06 crc kubenswrapper[4892]: I0122 09:29:06.709968 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7fbbbbb748-pbjzb" podUID="5766f7c7-fa97-4fe0-af76-e06667a8079b" containerName="barbican-api" containerID="cri-o://92f4deaf1e97c6c8f682cb1b378f077c2d766b5250783bf688134b8e64703a5d" gracePeriod=30 Jan 22 09:29:07 crc kubenswrapper[4892]: I0122 09:29:07.105437 4892 generic.go:334] "Generic (PLEG): container finished" podID="5766f7c7-fa97-4fe0-af76-e06667a8079b" containerID="38190e579c44ae7b68c5d69d0962ee46006456495ac5d09a2b0fd1e7ff4ed696" exitCode=143 Jan 22 09:29:07 crc kubenswrapper[4892]: I0122 09:29:07.106499 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7fbbbbb748-pbjzb" event={"ID":"5766f7c7-fa97-4fe0-af76-e06667a8079b","Type":"ContainerDied","Data":"38190e579c44ae7b68c5d69d0962ee46006456495ac5d09a2b0fd1e7ff4ed696"} Jan 22 09:29:07 crc kubenswrapper[4892]: I0122 09:29:07.265786 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7647f5f4ff-hmkw9" Jan 22 09:29:07 crc kubenswrapper[4892]: I0122 09:29:07.467021 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 22 09:29:08 crc kubenswrapper[4892]: I0122 09:29:08.296582 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 22 09:29:09 crc kubenswrapper[4892]: I0122 09:29:09.617009 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/neutron-54fd5b85c6-qxq5r" Jan 22 09:29:09 crc kubenswrapper[4892]: I0122 09:29:09.894013 4892 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7fbbbbb748-pbjzb" podUID="5766f7c7-fa97-4fe0-af76-e06667a8079b" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": read tcp 10.217.0.2:50842->10.217.0.164:9311: read: connection reset by peer" Jan 22 09:29:09 crc kubenswrapper[4892]: I0122 09:29:09.894121 4892 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7fbbbbb748-pbjzb" podUID="5766f7c7-fa97-4fe0-af76-e06667a8079b" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": read tcp 10.217.0.2:50838->10.217.0.164:9311: read: connection reset by peer" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.145844 4892 generic.go:334] "Generic (PLEG): container finished" podID="5766f7c7-fa97-4fe0-af76-e06667a8079b" containerID="92f4deaf1e97c6c8f682cb1b378f077c2d766b5250783bf688134b8e64703a5d" exitCode=0 Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.146240 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7fbbbbb748-pbjzb" event={"ID":"5766f7c7-fa97-4fe0-af76-e06667a8079b","Type":"ContainerDied","Data":"92f4deaf1e97c6c8f682cb1b378f077c2d766b5250783bf688134b8e64703a5d"} Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.249516 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.250588 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.252470 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.253146 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-znxfz" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.254363 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.263301 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.327821 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7fbbbbb748-pbjzb" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.399764 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3cfe8fd4-14aa-4549-83da-b3e19538efce-openstack-config\") pod \"openstackclient\" (UID: \"3cfe8fd4-14aa-4549-83da-b3e19538efce\") " pod="openstack/openstackclient" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.399840 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3cfe8fd4-14aa-4549-83da-b3e19538efce-openstack-config-secret\") pod \"openstackclient\" (UID: \"3cfe8fd4-14aa-4549-83da-b3e19538efce\") " pod="openstack/openstackclient" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.399900 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cfe8fd4-14aa-4549-83da-b3e19538efce-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3cfe8fd4-14aa-4549-83da-b3e19538efce\") " pod="openstack/openstackclient" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.399927 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjzwf\" (UniqueName: \"kubernetes.io/projected/3cfe8fd4-14aa-4549-83da-b3e19538efce-kube-api-access-wjzwf\") pod \"openstackclient\" (UID: \"3cfe8fd4-14aa-4549-83da-b3e19538efce\") " pod="openstack/openstackclient" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.488349 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 22 09:29:10 crc kubenswrapper[4892]: E0122 09:29:10.489156 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-wjzwf openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="3cfe8fd4-14aa-4549-83da-b3e19538efce" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.495050 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.512042 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5766f7c7-fa97-4fe0-af76-e06667a8079b-combined-ca-bundle\") pod \"5766f7c7-fa97-4fe0-af76-e06667a8079b\" (UID: \"5766f7c7-fa97-4fe0-af76-e06667a8079b\") " Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.512095 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64mxk\" (UniqueName: \"kubernetes.io/projected/5766f7c7-fa97-4fe0-af76-e06667a8079b-kube-api-access-64mxk\") pod \"5766f7c7-fa97-4fe0-af76-e06667a8079b\" (UID: \"5766f7c7-fa97-4fe0-af76-e06667a8079b\") " Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.512139 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5766f7c7-fa97-4fe0-af76-e06667a8079b-config-data-custom\") pod \"5766f7c7-fa97-4fe0-af76-e06667a8079b\" (UID: \"5766f7c7-fa97-4fe0-af76-e06667a8079b\") " Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.512551 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/5766f7c7-fa97-4fe0-af76-e06667a8079b-logs\") pod \"5766f7c7-fa97-4fe0-af76-e06667a8079b\" (UID: \"5766f7c7-fa97-4fe0-af76-e06667a8079b\") " Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.512608 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5766f7c7-fa97-4fe0-af76-e06667a8079b-config-data\") pod \"5766f7c7-fa97-4fe0-af76-e06667a8079b\" (UID: \"5766f7c7-fa97-4fe0-af76-e06667a8079b\") " Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.512980 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3cfe8fd4-14aa-4549-83da-b3e19538efce-openstack-config\") pod \"openstackclient\" (UID: \"3cfe8fd4-14aa-4549-83da-b3e19538efce\") " pod="openstack/openstackclient" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.513783 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3cfe8fd4-14aa-4549-83da-b3e19538efce-openstack-config\") pod \"openstackclient\" (UID: \"3cfe8fd4-14aa-4549-83da-b3e19538efce\") " pod="openstack/openstackclient" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.513508 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3cfe8fd4-14aa-4549-83da-b3e19538efce-openstack-config-secret\") pod \"openstackclient\" (UID: \"3cfe8fd4-14aa-4549-83da-b3e19538efce\") " pod="openstack/openstackclient" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.513982 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cfe8fd4-14aa-4549-83da-b3e19538efce-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3cfe8fd4-14aa-4549-83da-b3e19538efce\") " pod="openstack/openstackclient" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.514327 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjzwf\" (UniqueName: \"kubernetes.io/projected/3cfe8fd4-14aa-4549-83da-b3e19538efce-kube-api-access-wjzwf\") pod \"openstackclient\" (UID: \"3cfe8fd4-14aa-4549-83da-b3e19538efce\") " pod="openstack/openstackclient" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.514789 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5766f7c7-fa97-4fe0-af76-e06667a8079b-logs" (OuterVolumeSpecName: "logs") pod "5766f7c7-fa97-4fe0-af76-e06667a8079b" (UID: "5766f7c7-fa97-4fe0-af76-e06667a8079b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:29:10 crc kubenswrapper[4892]: E0122 09:29:10.515688 4892 projected.go:194] Error preparing data for projected volume kube-api-access-wjzwf for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: User "system:node:crc" cannot create resource "serviceaccounts/token" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 22 09:29:10 crc kubenswrapper[4892]: E0122 09:29:10.515738 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3cfe8fd4-14aa-4549-83da-b3e19538efce-kube-api-access-wjzwf podName:3cfe8fd4-14aa-4549-83da-b3e19538efce nodeName:}" failed. 
No retries permitted until 2026-01-22 09:29:11.015724537 +0000 UTC m=+1120.859803600 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wjzwf" (UniqueName: "kubernetes.io/projected/3cfe8fd4-14aa-4549-83da-b3e19538efce-kube-api-access-wjzwf") pod "openstackclient" (UID: "3cfe8fd4-14aa-4549-83da-b3e19538efce") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: User "system:node:crc" cannot create resource "serviceaccounts/token" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.521362 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5766f7c7-fa97-4fe0-af76-e06667a8079b-kube-api-access-64mxk" (OuterVolumeSpecName: "kube-api-access-64mxk") pod "5766f7c7-fa97-4fe0-af76-e06667a8079b" (UID: "5766f7c7-fa97-4fe0-af76-e06667a8079b"). InnerVolumeSpecName "kube-api-access-64mxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.522855 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3cfe8fd4-14aa-4549-83da-b3e19538efce-openstack-config-secret\") pod \"openstackclient\" (UID: \"3cfe8fd4-14aa-4549-83da-b3e19538efce\") " pod="openstack/openstackclient" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.527607 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5766f7c7-fa97-4fe0-af76-e06667a8079b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5766f7c7-fa97-4fe0-af76-e06667a8079b" (UID: "5766f7c7-fa97-4fe0-af76-e06667a8079b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.527648 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cfe8fd4-14aa-4549-83da-b3e19538efce-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3cfe8fd4-14aa-4549-83da-b3e19538efce\") " pod="openstack/openstackclient" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.578170 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5766f7c7-fa97-4fe0-af76-e06667a8079b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5766f7c7-fa97-4fe0-af76-e06667a8079b" (UID: "5766f7c7-fa97-4fe0-af76-e06667a8079b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.585899 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 22 09:29:10 crc kubenswrapper[4892]: E0122 09:29:10.586429 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5766f7c7-fa97-4fe0-af76-e06667a8079b" containerName="barbican-api-log" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.586453 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="5766f7c7-fa97-4fe0-af76-e06667a8079b" containerName="barbican-api-log" Jan 22 09:29:10 crc kubenswrapper[4892]: E0122 09:29:10.586468 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5766f7c7-fa97-4fe0-af76-e06667a8079b" containerName="barbican-api" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.586476 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="5766f7c7-fa97-4fe0-af76-e06667a8079b" containerName="barbican-api" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.586692 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="5766f7c7-fa97-4fe0-af76-e06667a8079b" containerName="barbican-api" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.586718 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="5766f7c7-fa97-4fe0-af76-e06667a8079b" containerName="barbican-api-log" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.587429 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.593403 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.597461 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5766f7c7-fa97-4fe0-af76-e06667a8079b-config-data" (OuterVolumeSpecName: "config-data") pod "5766f7c7-fa97-4fe0-af76-e06667a8079b" (UID: "5766f7c7-fa97-4fe0-af76-e06667a8079b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.628750 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d9a4d1e6-4981-477c-b2cf-8a132de2c1d9-openstack-config\") pod \"openstackclient\" (UID: \"d9a4d1e6-4981-477c-b2cf-8a132de2c1d9\") " pod="openstack/openstackclient" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.628918 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9a4d1e6-4981-477c-b2cf-8a132de2c1d9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d9a4d1e6-4981-477c-b2cf-8a132de2c1d9\") " pod="openstack/openstackclient" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.628996 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d9a4d1e6-4981-477c-b2cf-8a132de2c1d9-openstack-config-secret\") pod \"openstackclient\" (UID: \"d9a4d1e6-4981-477c-b2cf-8a132de2c1d9\") " pod="openstack/openstackclient" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.629039 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhfjr\" (UniqueName: \"kubernetes.io/projected/d9a4d1e6-4981-477c-b2cf-8a132de2c1d9-kube-api-access-lhfjr\") pod \"openstackclient\" (UID: \"d9a4d1e6-4981-477c-b2cf-8a132de2c1d9\") " pod="openstack/openstackclient" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.629189 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5766f7c7-fa97-4fe0-af76-e06667a8079b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.629204 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64mxk\" (UniqueName: \"kubernetes.io/projected/5766f7c7-fa97-4fe0-af76-e06667a8079b-kube-api-access-64mxk\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.629219 4892 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5766f7c7-fa97-4fe0-af76-e06667a8079b-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.630384 4892 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5766f7c7-fa97-4fe0-af76-e06667a8079b-logs\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.630412 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5766f7c7-fa97-4fe0-af76-e06667a8079b-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.732345 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9a4d1e6-4981-477c-b2cf-8a132de2c1d9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d9a4d1e6-4981-477c-b2cf-8a132de2c1d9\") " pod="openstack/openstackclient" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.732430 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/d9a4d1e6-4981-477c-b2cf-8a132de2c1d9-openstack-config-secret\") pod \"openstackclient\" (UID: \"d9a4d1e6-4981-477c-b2cf-8a132de2c1d9\") " pod="openstack/openstackclient" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.732466 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhfjr\" (UniqueName: \"kubernetes.io/projected/d9a4d1e6-4981-477c-b2cf-8a132de2c1d9-kube-api-access-lhfjr\") pod \"openstackclient\" (UID: \"d9a4d1e6-4981-477c-b2cf-8a132de2c1d9\") " pod="openstack/openstackclient" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.732544 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d9a4d1e6-4981-477c-b2cf-8a132de2c1d9-openstack-config\") pod \"openstackclient\" (UID: \"d9a4d1e6-4981-477c-b2cf-8a132de2c1d9\") " pod="openstack/openstackclient" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.733316 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d9a4d1e6-4981-477c-b2cf-8a132de2c1d9-openstack-config\") pod \"openstackclient\" (UID: \"d9a4d1e6-4981-477c-b2cf-8a132de2c1d9\") " pod="openstack/openstackclient" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.735905 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9a4d1e6-4981-477c-b2cf-8a132de2c1d9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d9a4d1e6-4981-477c-b2cf-8a132de2c1d9\") " pod="openstack/openstackclient" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.736212 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d9a4d1e6-4981-477c-b2cf-8a132de2c1d9-openstack-config-secret\") pod \"openstackclient\" (UID: \"d9a4d1e6-4981-477c-b2cf-8a132de2c1d9\") " pod="openstack/openstackclient" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.750941 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhfjr\" (UniqueName: \"kubernetes.io/projected/d9a4d1e6-4981-477c-b2cf-8a132de2c1d9-kube-api-access-lhfjr\") pod \"openstackclient\" (UID: \"d9a4d1e6-4981-477c-b2cf-8a132de2c1d9\") " pod="openstack/openstackclient" Jan 22 09:29:10 crc kubenswrapper[4892]: I0122 09:29:10.946455 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 22 09:29:11 crc kubenswrapper[4892]: I0122 09:29:11.035546 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjzwf\" (UniqueName: \"kubernetes.io/projected/3cfe8fd4-14aa-4549-83da-b3e19538efce-kube-api-access-wjzwf\") pod \"openstackclient\" (UID: \"3cfe8fd4-14aa-4549-83da-b3e19538efce\") " pod="openstack/openstackclient" Jan 22 09:29:11 crc kubenswrapper[4892]: E0122 09:29:11.037908 4892 projected.go:194] Error preparing data for projected volume kube-api-access-wjzwf for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (3cfe8fd4-14aa-4549-83da-b3e19538efce) does not match the UID in record. 
The object might have been deleted and then recreated Jan 22 09:29:11 crc kubenswrapper[4892]: E0122 09:29:11.037951 4892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3cfe8fd4-14aa-4549-83da-b3e19538efce-kube-api-access-wjzwf podName:3cfe8fd4-14aa-4549-83da-b3e19538efce nodeName:}" failed. No retries permitted until 2026-01-22 09:29:12.037937462 +0000 UTC m=+1121.882016525 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-wjzwf" (UniqueName: "kubernetes.io/projected/3cfe8fd4-14aa-4549-83da-b3e19538efce-kube-api-access-wjzwf") pod "openstackclient" (UID: "3cfe8fd4-14aa-4549-83da-b3e19538efce") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (3cfe8fd4-14aa-4549-83da-b3e19538efce) does not match the UID in record. The object might have been deleted and then recreated Jan 22 09:29:11 crc kubenswrapper[4892]: I0122 09:29:11.159969 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 22 09:29:11 crc kubenswrapper[4892]: I0122 09:29:11.160013 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7fbbbbb748-pbjzb" Jan 22 09:29:11 crc kubenswrapper[4892]: I0122 09:29:11.160055 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7fbbbbb748-pbjzb" event={"ID":"5766f7c7-fa97-4fe0-af76-e06667a8079b","Type":"ContainerDied","Data":"a43cd0a0f2cf75182e26a41a362d5d1f20621556a03b3b7304304a17a0b5f4da"} Jan 22 09:29:11 crc kubenswrapper[4892]: I0122 09:29:11.160093 4892 scope.go:117] "RemoveContainer" containerID="92f4deaf1e97c6c8f682cb1b378f077c2d766b5250783bf688134b8e64703a5d" Jan 22 09:29:11 crc kubenswrapper[4892]: I0122 09:29:11.163136 4892 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="3cfe8fd4-14aa-4549-83da-b3e19538efce" podUID="d9a4d1e6-4981-477c-b2cf-8a132de2c1d9" Jan 22 09:29:11 crc kubenswrapper[4892]: I0122 09:29:11.172748 4892 util.go:30] "No sandbox for pod can be found. 
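The two E-level entries above are the most informative failure in this stretch. A kube-api-access-* projected volume is backed by a TokenRequest bound to the pod object, UID included; because openstackclient was deleted and recreated (old UID 3cfe8fd4..., new UID d9a4d1e6...), the API server refuses to issue a token against the stale UID, and the status_manager entry makes the same diagnosis. A sketch of the equivalent client-go call; the kubeconfig handling is illustrative, and the UID is deliberately the stale one to reproduce the rejection:

```go
package main

import (
	"context"
	"fmt"
	"os"

	authenticationv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Bind the token to a specific pod *and UID*, as the kubelet does for
	// kube-api-access-* projected volumes. If the pod was deleted and
	// recreated, the recorded UID no longer matches the live object and
	// the API server returns the "does not match the UID in record" error.
	req := &authenticationv1.TokenRequest{
		Spec: authenticationv1.TokenRequestSpec{
			Audiences: []string{"https://kubernetes.default.svc"},
			BoundObjectRef: &authenticationv1.BoundObjectReference{
				Kind:       "Pod",
				APIVersion: "v1",
				Name:       "openstackclient",
				UID:        types.UID("3cfe8fd4-14aa-4549-83da-b3e19538efce"), // stale UID
			},
		},
	}
	tok, err := client.CoreV1().ServiceAccounts("openstack").
		CreateToken(context.TODO(), "openstackclient-openstackclient", req, metav1.CreateOptions{})
	if err != nil {
		fmt.Println("token request rejected:", err) // expected with a stale UID
		return
	}
	fmt.Println("token expires:", tok.Status.ExpirationTimestamp)
}
```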
Need to start a new one" pod="openstack/openstackclient" Jan 22 09:29:11 crc kubenswrapper[4892]: I0122 09:29:11.189462 4892 scope.go:117] "RemoveContainer" containerID="38190e579c44ae7b68c5d69d0962ee46006456495ac5d09a2b0fd1e7ff4ed696" Jan 22 09:29:11 crc kubenswrapper[4892]: I0122 09:29:11.239593 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3cfe8fd4-14aa-4549-83da-b3e19538efce-openstack-config\") pod \"3cfe8fd4-14aa-4549-83da-b3e19538efce\" (UID: \"3cfe8fd4-14aa-4549-83da-b3e19538efce\") " Jan 22 09:29:11 crc kubenswrapper[4892]: I0122 09:29:11.239653 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cfe8fd4-14aa-4549-83da-b3e19538efce-combined-ca-bundle\") pod \"3cfe8fd4-14aa-4549-83da-b3e19538efce\" (UID: \"3cfe8fd4-14aa-4549-83da-b3e19538efce\") " Jan 22 09:29:11 crc kubenswrapper[4892]: I0122 09:29:11.239739 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3cfe8fd4-14aa-4549-83da-b3e19538efce-openstack-config-secret\") pod \"3cfe8fd4-14aa-4549-83da-b3e19538efce\" (UID: \"3cfe8fd4-14aa-4549-83da-b3e19538efce\") " Jan 22 09:29:11 crc kubenswrapper[4892]: I0122 09:29:11.240031 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjzwf\" (UniqueName: \"kubernetes.io/projected/3cfe8fd4-14aa-4549-83da-b3e19538efce-kube-api-access-wjzwf\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:11 crc kubenswrapper[4892]: I0122 09:29:11.242218 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cfe8fd4-14aa-4549-83da-b3e19538efce-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "3cfe8fd4-14aa-4549-83da-b3e19538efce" (UID: "3cfe8fd4-14aa-4549-83da-b3e19538efce"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:29:11 crc kubenswrapper[4892]: I0122 09:29:11.244436 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7fbbbbb748-pbjzb"] Jan 22 09:29:11 crc kubenswrapper[4892]: I0122 09:29:11.252887 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7fbbbbb748-pbjzb"] Jan 22 09:29:11 crc kubenswrapper[4892]: I0122 09:29:11.257414 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cfe8fd4-14aa-4549-83da-b3e19538efce-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "3cfe8fd4-14aa-4549-83da-b3e19538efce" (UID: "3cfe8fd4-14aa-4549-83da-b3e19538efce"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:11 crc kubenswrapper[4892]: I0122 09:29:11.257512 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cfe8fd4-14aa-4549-83da-b3e19538efce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3cfe8fd4-14aa-4549-83da-b3e19538efce" (UID: "3cfe8fd4-14aa-4549-83da-b3e19538efce"). InnerVolumeSpecName "combined-ca-bundle". 
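The nestedpendingoperations entry above is the volume manager's retry gate: after the token fetch fails, the operation is locked out until a deadline ("durationBeforeRetry 1s", "No retries permitted until ..."), and the wait grows exponentially on repeated failures. A self-contained sketch of that gating; the doubling factor and cap are illustrative, not the kubelet's exact constants:

```go
package main

import (
	"fmt"
	"time"
)

// backoff tracks one pending operation the way nestedpendingoperations
// does: a deadline before which no retry of the same operation may start.
type backoff struct {
	delay    time.Duration
	notUntil time.Time
}

const (
	initialDelay = time.Second // matches "durationBeforeRetry 1s" above
	maxDelay     = 2 * time.Minute
)

// fail records a failure and pushes the retry deadline out exponentially.
func (b *backoff) fail(now time.Time) {
	if b.delay == 0 {
		b.delay = initialDelay
	} else {
		b.delay *= 2
		if b.delay > maxDelay {
			b.delay = maxDelay
		}
	}
	b.notUntil = now.Add(b.delay)
}

// allowed reports whether a retry may start yet.
func (b *backoff) allowed(now time.Time) bool { return !now.Before(b.notUntil) }

func main() {
	var b backoff
	now := time.Now()
	for i := 0; i < 4; i++ {
		b.fail(now)
		fmt.Printf("failure %d: no retries permitted until %s (+%s)\n",
			i+1, b.notUntil.Format(time.RFC3339), b.delay)
		now = b.notUntil // assume the next attempt fails right at the deadline
	}
}
```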
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:11 crc kubenswrapper[4892]: I0122 09:29:11.341221 4892 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3cfe8fd4-14aa-4549-83da-b3e19538efce-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:11 crc kubenswrapper[4892]: I0122 09:29:11.341272 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cfe8fd4-14aa-4549-83da-b3e19538efce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:11 crc kubenswrapper[4892]: I0122 09:29:11.341350 4892 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3cfe8fd4-14aa-4549-83da-b3e19538efce-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:11 crc kubenswrapper[4892]: I0122 09:29:11.407137 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 22 09:29:11 crc kubenswrapper[4892]: I0122 09:29:11.432030 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cfe8fd4-14aa-4549-83da-b3e19538efce" path="/var/lib/kubelet/pods/3cfe8fd4-14aa-4549-83da-b3e19538efce/volumes" Jan 22 09:29:11 crc kubenswrapper[4892]: I0122 09:29:11.432547 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5766f7c7-fa97-4fe0-af76-e06667a8079b" path="/var/lib/kubelet/pods/5766f7c7-fa97-4fe0-af76-e06667a8079b/volumes" Jan 22 09:29:12 crc kubenswrapper[4892]: I0122 09:29:12.070250 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5b7dcc6b6f-vkw7t" Jan 22 09:29:12 crc kubenswrapper[4892]: I0122 09:29:12.145000 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-54fd5b85c6-qxq5r"] Jan 22 09:29:12 crc kubenswrapper[4892]: I0122 09:29:12.145233 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-54fd5b85c6-qxq5r" podUID="f3109c9e-e308-4022-8b5e-4b8b61319c24" containerName="neutron-api" containerID="cri-o://38cff84053446e01eeaed5cc30661cff4e7386deec1a1445e6d4ccbdad920ac0" gracePeriod=30 Jan 22 09:29:12 crc kubenswrapper[4892]: I0122 09:29:12.145637 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-54fd5b85c6-qxq5r" podUID="f3109c9e-e308-4022-8b5e-4b8b61319c24" containerName="neutron-httpd" containerID="cri-o://867b7721afacf710e82112037e4927efef3ba73c17c7aa5bb8cc9d443b188243" gracePeriod=30 Jan 22 09:29:12 crc kubenswrapper[4892]: I0122 09:29:12.180171 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d9a4d1e6-4981-477c-b2cf-8a132de2c1d9","Type":"ContainerStarted","Data":"43eebd7ef293a74364e226738d49302c09c554a2c82a9587d4023ad995d84b84"} Jan 22 09:29:12 crc kubenswrapper[4892]: I0122 09:29:12.188076 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 22 09:29:12 crc kubenswrapper[4892]: I0122 09:29:12.198777 4892 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="3cfe8fd4-14aa-4549-83da-b3e19538efce" podUID="d9a4d1e6-4981-477c-b2cf-8a132de2c1d9" Jan 22 09:29:12 crc kubenswrapper[4892]: I0122 09:29:12.678747 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 22 09:29:13 crc kubenswrapper[4892]: I0122 09:29:13.199873 4892 generic.go:334] "Generic (PLEG): container finished" podID="f3109c9e-e308-4022-8b5e-4b8b61319c24" containerID="867b7721afacf710e82112037e4927efef3ba73c17c7aa5bb8cc9d443b188243" exitCode=0 Jan 22 09:29:13 crc kubenswrapper[4892]: I0122 09:29:13.199913 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54fd5b85c6-qxq5r" event={"ID":"f3109c9e-e308-4022-8b5e-4b8b61319c24","Type":"ContainerDied","Data":"867b7721afacf710e82112037e4927efef3ba73c17c7aa5bb8cc9d443b188243"} Jan 22 09:29:14 crc kubenswrapper[4892]: I0122 09:29:14.185709 4892 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-79777d5484-zk25q" podUID="c837fca4-ae2c-43fd-850c-f2aca8331d27" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 22 09:29:14 crc kubenswrapper[4892]: I0122 09:29:14.421240 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:29:14 crc kubenswrapper[4892]: I0122 09:29:14.421494 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="64acec33-a0ea-4b7f-b7dd-ae704b047a95" containerName="ceilometer-central-agent" containerID="cri-o://779ed22187101d315e9196eabd0c32bf707f10f7a60f267f799195fd44b36e3c" gracePeriod=30 Jan 22 09:29:14 crc kubenswrapper[4892]: I0122 09:29:14.422191 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="64acec33-a0ea-4b7f-b7dd-ae704b047a95" containerName="proxy-httpd" containerID="cri-o://fee122524d788c74233f0530d6672b2c001393b745996bd9552612a9de378a92" gracePeriod=30 Jan 22 09:29:14 crc kubenswrapper[4892]: I0122 09:29:14.422259 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="64acec33-a0ea-4b7f-b7dd-ae704b047a95" containerName="sg-core" containerID="cri-o://b3bec6e9894c7cd6ad1ce0be1603747f327c8c4e715957d82111fcafaa304e32" gracePeriod=30 Jan 22 09:29:14 crc kubenswrapper[4892]: I0122 09:29:14.422320 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="64acec33-a0ea-4b7f-b7dd-ae704b047a95" containerName="ceilometer-notification-agent" containerID="cri-o://0f416c07896322b43443694c5c94a4a32ecdc0e4d223f3ec89e702468fd1fe8f" gracePeriod=30 Jan 22 09:29:14 crc kubenswrapper[4892]: I0122 09:29:14.432643 4892 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="64acec33-a0ea-4b7f-b7dd-ae704b047a95" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.166:3000/\": EOF" Jan 22 09:29:15 crc kubenswrapper[4892]: I0122 09:29:15.153050 4892 util.go:48] "No ready sandbox for pod can be found. 
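The "Probe failed" entries are expected fallout: the kubelet keeps running readiness probes regardless of what else is happening on the node, so a refused connection (horizon) or an EOF from the dying ceilometer proxy-httpd is simply recorded with the probe output. A sketch of an HTTP readiness check that classifies results the way prober.go reports them; the URL and timeout are illustrative:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeHTTP performs one readiness check and classifies the outcome the
// way prober.go reports it: success, or failure plus the probe output.
// Like the kubelet, it counts any 2xx/3xx status as success.
func probeHTTP(url string) (ok bool, output string) {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return false, err.Error() // e.g. "connect: connection refused", "EOF"
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return true, resp.Status
	}
	return false, resp.Status
}

func main() {
	// Probing a port with no listener reproduces the "connection refused"
	// failure seen for horizon above (address is illustrative).
	ok, out := probeHTTP("http://127.0.0.1:8443/dashboard/auth/login/")
	fmt.Printf("ready=%v output=%q\n", ok, out)
}
```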
Need to start a new one" pod="openstack/neutron-54fd5b85c6-qxq5r" Jan 22 09:29:15 crc kubenswrapper[4892]: I0122 09:29:15.216636 4892 generic.go:334] "Generic (PLEG): container finished" podID="f3109c9e-e308-4022-8b5e-4b8b61319c24" containerID="38cff84053446e01eeaed5cc30661cff4e7386deec1a1445e6d4ccbdad920ac0" exitCode=0 Jan 22 09:29:15 crc kubenswrapper[4892]: I0122 09:29:15.216686 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-54fd5b85c6-qxq5r" Jan 22 09:29:15 crc kubenswrapper[4892]: I0122 09:29:15.216693 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54fd5b85c6-qxq5r" event={"ID":"f3109c9e-e308-4022-8b5e-4b8b61319c24","Type":"ContainerDied","Data":"38cff84053446e01eeaed5cc30661cff4e7386deec1a1445e6d4ccbdad920ac0"} Jan 22 09:29:15 crc kubenswrapper[4892]: I0122 09:29:15.216815 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54fd5b85c6-qxq5r" event={"ID":"f3109c9e-e308-4022-8b5e-4b8b61319c24","Type":"ContainerDied","Data":"2f37ec37b00a58ded59f8bb517b3c2e345029424815d59a14e4bd62a544d086f"} Jan 22 09:29:15 crc kubenswrapper[4892]: I0122 09:29:15.216836 4892 scope.go:117] "RemoveContainer" containerID="867b7721afacf710e82112037e4927efef3ba73c17c7aa5bb8cc9d443b188243" Jan 22 09:29:15 crc kubenswrapper[4892]: I0122 09:29:15.219565 4892 generic.go:334] "Generic (PLEG): container finished" podID="64acec33-a0ea-4b7f-b7dd-ae704b047a95" containerID="fee122524d788c74233f0530d6672b2c001393b745996bd9552612a9de378a92" exitCode=0 Jan 22 09:29:15 crc kubenswrapper[4892]: I0122 09:29:15.219592 4892 generic.go:334] "Generic (PLEG): container finished" podID="64acec33-a0ea-4b7f-b7dd-ae704b047a95" containerID="b3bec6e9894c7cd6ad1ce0be1603747f327c8c4e715957d82111fcafaa304e32" exitCode=2 Jan 22 09:29:15 crc kubenswrapper[4892]: I0122 09:29:15.219600 4892 generic.go:334] "Generic (PLEG): container finished" podID="64acec33-a0ea-4b7f-b7dd-ae704b047a95" containerID="779ed22187101d315e9196eabd0c32bf707f10f7a60f267f799195fd44b36e3c" exitCode=0 Jan 22 09:29:15 crc kubenswrapper[4892]: I0122 09:29:15.219622 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64acec33-a0ea-4b7f-b7dd-ae704b047a95","Type":"ContainerDied","Data":"fee122524d788c74233f0530d6672b2c001393b745996bd9552612a9de378a92"} Jan 22 09:29:15 crc kubenswrapper[4892]: I0122 09:29:15.219647 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64acec33-a0ea-4b7f-b7dd-ae704b047a95","Type":"ContainerDied","Data":"b3bec6e9894c7cd6ad1ce0be1603747f327c8c4e715957d82111fcafaa304e32"} Jan 22 09:29:15 crc kubenswrapper[4892]: I0122 09:29:15.219663 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64acec33-a0ea-4b7f-b7dd-ae704b047a95","Type":"ContainerDied","Data":"779ed22187101d315e9196eabd0c32bf707f10f7a60f267f799195fd44b36e3c"} Jan 22 09:29:15 crc kubenswrapper[4892]: I0122 09:29:15.237643 4892 scope.go:117] "RemoveContainer" containerID="38cff84053446e01eeaed5cc30661cff4e7386deec1a1445e6d4ccbdad920ac0" Jan 22 09:29:15 crc kubenswrapper[4892]: I0122 09:29:15.255695 4892 scope.go:117] "RemoveContainer" containerID="867b7721afacf710e82112037e4927efef3ba73c17c7aa5bb8cc9d443b188243" Jan 22 09:29:15 crc kubenswrapper[4892]: E0122 09:29:15.256154 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"867b7721afacf710e82112037e4927efef3ba73c17c7aa5bb8cc9d443b188243\": container with ID starting with 867b7721afacf710e82112037e4927efef3ba73c17c7aa5bb8cc9d443b188243 not found: ID does not exist" containerID="867b7721afacf710e82112037e4927efef3ba73c17c7aa5bb8cc9d443b188243" Jan 22 09:29:15 crc kubenswrapper[4892]: I0122 09:29:15.256200 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"867b7721afacf710e82112037e4927efef3ba73c17c7aa5bb8cc9d443b188243"} err="failed to get container status \"867b7721afacf710e82112037e4927efef3ba73c17c7aa5bb8cc9d443b188243\": rpc error: code = NotFound desc = could not find container \"867b7721afacf710e82112037e4927efef3ba73c17c7aa5bb8cc9d443b188243\": container with ID starting with 867b7721afacf710e82112037e4927efef3ba73c17c7aa5bb8cc9d443b188243 not found: ID does not exist" Jan 22 09:29:15 crc kubenswrapper[4892]: I0122 09:29:15.256226 4892 scope.go:117] "RemoveContainer" containerID="38cff84053446e01eeaed5cc30661cff4e7386deec1a1445e6d4ccbdad920ac0" Jan 22 09:29:15 crc kubenswrapper[4892]: E0122 09:29:15.256787 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38cff84053446e01eeaed5cc30661cff4e7386deec1a1445e6d4ccbdad920ac0\": container with ID starting with 38cff84053446e01eeaed5cc30661cff4e7386deec1a1445e6d4ccbdad920ac0 not found: ID does not exist" containerID="38cff84053446e01eeaed5cc30661cff4e7386deec1a1445e6d4ccbdad920ac0" Jan 22 09:29:15 crc kubenswrapper[4892]: I0122 09:29:15.256817 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38cff84053446e01eeaed5cc30661cff4e7386deec1a1445e6d4ccbdad920ac0"} err="failed to get container status \"38cff84053446e01eeaed5cc30661cff4e7386deec1a1445e6d4ccbdad920ac0\": rpc error: code = NotFound desc = could not find container \"38cff84053446e01eeaed5cc30661cff4e7386deec1a1445e6d4ccbdad920ac0\": container with ID starting with 38cff84053446e01eeaed5cc30661cff4e7386deec1a1445e6d4ccbdad920ac0 not found: ID does not exist" Jan 22 09:29:15 crc kubenswrapper[4892]: I0122 09:29:15.339404 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3109c9e-e308-4022-8b5e-4b8b61319c24-combined-ca-bundle\") pod \"f3109c9e-e308-4022-8b5e-4b8b61319c24\" (UID: \"f3109c9e-e308-4022-8b5e-4b8b61319c24\") " Jan 22 09:29:15 crc kubenswrapper[4892]: I0122 09:29:15.339466 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qv4gp\" (UniqueName: \"kubernetes.io/projected/f3109c9e-e308-4022-8b5e-4b8b61319c24-kube-api-access-qv4gp\") pod \"f3109c9e-e308-4022-8b5e-4b8b61319c24\" (UID: \"f3109c9e-e308-4022-8b5e-4b8b61319c24\") " Jan 22 09:29:15 crc kubenswrapper[4892]: I0122 09:29:15.339508 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f3109c9e-e308-4022-8b5e-4b8b61319c24-config\") pod \"f3109c9e-e308-4022-8b5e-4b8b61319c24\" (UID: \"f3109c9e-e308-4022-8b5e-4b8b61319c24\") " Jan 22 09:29:15 crc kubenswrapper[4892]: I0122 09:29:15.339538 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3109c9e-e308-4022-8b5e-4b8b61319c24-ovndb-tls-certs\") pod \"f3109c9e-e308-4022-8b5e-4b8b61319c24\" (UID: \"f3109c9e-e308-4022-8b5e-4b8b61319c24\") " Jan 22 09:29:15 crc 
kubenswrapper[4892]: I0122 09:29:15.339708 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f3109c9e-e308-4022-8b5e-4b8b61319c24-httpd-config\") pod \"f3109c9e-e308-4022-8b5e-4b8b61319c24\" (UID: \"f3109c9e-e308-4022-8b5e-4b8b61319c24\") " Jan 22 09:29:15 crc kubenswrapper[4892]: I0122 09:29:15.345718 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3109c9e-e308-4022-8b5e-4b8b61319c24-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "f3109c9e-e308-4022-8b5e-4b8b61319c24" (UID: "f3109c9e-e308-4022-8b5e-4b8b61319c24"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:15 crc kubenswrapper[4892]: I0122 09:29:15.350427 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3109c9e-e308-4022-8b5e-4b8b61319c24-kube-api-access-qv4gp" (OuterVolumeSpecName: "kube-api-access-qv4gp") pod "f3109c9e-e308-4022-8b5e-4b8b61319c24" (UID: "f3109c9e-e308-4022-8b5e-4b8b61319c24"). InnerVolumeSpecName "kube-api-access-qv4gp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:29:15 crc kubenswrapper[4892]: I0122 09:29:15.391424 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3109c9e-e308-4022-8b5e-4b8b61319c24-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3109c9e-e308-4022-8b5e-4b8b61319c24" (UID: "f3109c9e-e308-4022-8b5e-4b8b61319c24"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:15 crc kubenswrapper[4892]: I0122 09:29:15.400392 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3109c9e-e308-4022-8b5e-4b8b61319c24-config" (OuterVolumeSpecName: "config") pod "f3109c9e-e308-4022-8b5e-4b8b61319c24" (UID: "f3109c9e-e308-4022-8b5e-4b8b61319c24"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:15 crc kubenswrapper[4892]: I0122 09:29:15.419439 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3109c9e-e308-4022-8b5e-4b8b61319c24-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "f3109c9e-e308-4022-8b5e-4b8b61319c24" (UID: "f3109c9e-e308-4022-8b5e-4b8b61319c24"). InnerVolumeSpecName "ovndb-tls-certs". 
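The RemoveContainer / "ContainerStatus from runtime service failed ... NotFound" pairs a few entries back look alarming but are benign: the kubelet asks CRI-O about a container it is deleting, the container is already gone, and NotFound means the desired end state already holds. The usual idempotent-delete pattern, sketched with a fake runtime:

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("rpc error: code = NotFound")

// fake runtime state: container IDs currently known to the runtime.
var containers = map[string]bool{}

func removeContainer(id string) error {
	if !containers[id] {
		return fmt.Errorf("could not find container %q: %w", id, errNotFound)
	}
	delete(containers, id)
	return nil
}

func main() {
	err := removeContainer("867b7721afac") // already gone, as in the log
	if errors.Is(err, errNotFound) {
		// Idempotent delete: NotFound means the desired end state
		// (container absent) already holds, so treat it as success.
		fmt.Println("already removed, nothing to do:", err)
		return
	}
	if err != nil {
		panic(err)
	}
}
```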
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:15 crc kubenswrapper[4892]: I0122 09:29:15.443800 4892 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3109c9e-e308-4022-8b5e-4b8b61319c24-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:15 crc kubenswrapper[4892]: I0122 09:29:15.443846 4892 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f3109c9e-e308-4022-8b5e-4b8b61319c24-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:15 crc kubenswrapper[4892]: I0122 09:29:15.443859 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3109c9e-e308-4022-8b5e-4b8b61319c24-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:15 crc kubenswrapper[4892]: I0122 09:29:15.443871 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qv4gp\" (UniqueName: \"kubernetes.io/projected/f3109c9e-e308-4022-8b5e-4b8b61319c24-kube-api-access-qv4gp\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:15 crc kubenswrapper[4892]: I0122 09:29:15.443885 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f3109c9e-e308-4022-8b5e-4b8b61319c24-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:15 crc kubenswrapper[4892]: I0122 09:29:15.536533 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-54fd5b85c6-qxq5r"] Jan 22 09:29:15 crc kubenswrapper[4892]: I0122 09:29:15.557795 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-54fd5b85c6-qxq5r"] Jan 22 09:29:16 crc kubenswrapper[4892]: I0122 09:29:16.752421 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-674547b56f-gvjxm"] Jan 22 09:29:16 crc kubenswrapper[4892]: E0122 09:29:16.753033 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3109c9e-e308-4022-8b5e-4b8b61319c24" containerName="neutron-api" Jan 22 09:29:16 crc kubenswrapper[4892]: I0122 09:29:16.753045 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3109c9e-e308-4022-8b5e-4b8b61319c24" containerName="neutron-api" Jan 22 09:29:16 crc kubenswrapper[4892]: E0122 09:29:16.753064 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3109c9e-e308-4022-8b5e-4b8b61319c24" containerName="neutron-httpd" Jan 22 09:29:16 crc kubenswrapper[4892]: I0122 09:29:16.753070 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3109c9e-e308-4022-8b5e-4b8b61319c24" containerName="neutron-httpd" Jan 22 09:29:16 crc kubenswrapper[4892]: I0122 09:29:16.753248 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3109c9e-e308-4022-8b5e-4b8b61319c24" containerName="neutron-api" Jan 22 09:29:16 crc kubenswrapper[4892]: I0122 09:29:16.753265 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3109c9e-e308-4022-8b5e-4b8b61319c24" containerName="neutron-httpd" Jan 22 09:29:16 crc kubenswrapper[4892]: I0122 09:29:16.754136 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-674547b56f-gvjxm" Jan 22 09:29:16 crc kubenswrapper[4892]: I0122 09:29:16.756592 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 22 09:29:16 crc kubenswrapper[4892]: I0122 09:29:16.756821 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 22 09:29:16 crc kubenswrapper[4892]: I0122 09:29:16.756845 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 22 09:29:16 crc kubenswrapper[4892]: I0122 09:29:16.774668 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/accdf866-14d0-4308-a8d7-c598fde46122-log-httpd\") pod \"swift-proxy-674547b56f-gvjxm\" (UID: \"accdf866-14d0-4308-a8d7-c598fde46122\") " pod="openstack/swift-proxy-674547b56f-gvjxm" Jan 22 09:29:16 crc kubenswrapper[4892]: I0122 09:29:16.774737 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/accdf866-14d0-4308-a8d7-c598fde46122-run-httpd\") pod \"swift-proxy-674547b56f-gvjxm\" (UID: \"accdf866-14d0-4308-a8d7-c598fde46122\") " pod="openstack/swift-proxy-674547b56f-gvjxm" Jan 22 09:29:16 crc kubenswrapper[4892]: I0122 09:29:16.774801 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/accdf866-14d0-4308-a8d7-c598fde46122-config-data\") pod \"swift-proxy-674547b56f-gvjxm\" (UID: \"accdf866-14d0-4308-a8d7-c598fde46122\") " pod="openstack/swift-proxy-674547b56f-gvjxm" Jan 22 09:29:16 crc kubenswrapper[4892]: I0122 09:29:16.774821 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/accdf866-14d0-4308-a8d7-c598fde46122-public-tls-certs\") pod \"swift-proxy-674547b56f-gvjxm\" (UID: \"accdf866-14d0-4308-a8d7-c598fde46122\") " pod="openstack/swift-proxy-674547b56f-gvjxm" Jan 22 09:29:16 crc kubenswrapper[4892]: I0122 09:29:16.774858 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/accdf866-14d0-4308-a8d7-c598fde46122-etc-swift\") pod \"swift-proxy-674547b56f-gvjxm\" (UID: \"accdf866-14d0-4308-a8d7-c598fde46122\") " pod="openstack/swift-proxy-674547b56f-gvjxm" Jan 22 09:29:16 crc kubenswrapper[4892]: I0122 09:29:16.774878 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h5b9\" (UniqueName: \"kubernetes.io/projected/accdf866-14d0-4308-a8d7-c598fde46122-kube-api-access-7h5b9\") pod \"swift-proxy-674547b56f-gvjxm\" (UID: \"accdf866-14d0-4308-a8d7-c598fde46122\") " pod="openstack/swift-proxy-674547b56f-gvjxm" Jan 22 09:29:16 crc kubenswrapper[4892]: I0122 09:29:16.774903 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/accdf866-14d0-4308-a8d7-c598fde46122-combined-ca-bundle\") pod \"swift-proxy-674547b56f-gvjxm\" (UID: \"accdf866-14d0-4308-a8d7-c598fde46122\") " pod="openstack/swift-proxy-674547b56f-gvjxm" Jan 22 09:29:16 crc kubenswrapper[4892]: I0122 09:29:16.774940 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/accdf866-14d0-4308-a8d7-c598fde46122-internal-tls-certs\") pod \"swift-proxy-674547b56f-gvjxm\" (UID: \"accdf866-14d0-4308-a8d7-c598fde46122\") " pod="openstack/swift-proxy-674547b56f-gvjxm" Jan 22 09:29:16 crc kubenswrapper[4892]: I0122 09:29:16.786118 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-674547b56f-gvjxm"] Jan 22 09:29:16 crc kubenswrapper[4892]: I0122 09:29:16.876788 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/accdf866-14d0-4308-a8d7-c598fde46122-config-data\") pod \"swift-proxy-674547b56f-gvjxm\" (UID: \"accdf866-14d0-4308-a8d7-c598fde46122\") " pod="openstack/swift-proxy-674547b56f-gvjxm" Jan 22 09:29:16 crc kubenswrapper[4892]: I0122 09:29:16.876851 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/accdf866-14d0-4308-a8d7-c598fde46122-public-tls-certs\") pod \"swift-proxy-674547b56f-gvjxm\" (UID: \"accdf866-14d0-4308-a8d7-c598fde46122\") " pod="openstack/swift-proxy-674547b56f-gvjxm" Jan 22 09:29:16 crc kubenswrapper[4892]: I0122 09:29:16.876880 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/accdf866-14d0-4308-a8d7-c598fde46122-etc-swift\") pod \"swift-proxy-674547b56f-gvjxm\" (UID: \"accdf866-14d0-4308-a8d7-c598fde46122\") " pod="openstack/swift-proxy-674547b56f-gvjxm" Jan 22 09:29:16 crc kubenswrapper[4892]: I0122 09:29:16.876924 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h5b9\" (UniqueName: \"kubernetes.io/projected/accdf866-14d0-4308-a8d7-c598fde46122-kube-api-access-7h5b9\") pod \"swift-proxy-674547b56f-gvjxm\" (UID: \"accdf866-14d0-4308-a8d7-c598fde46122\") " pod="openstack/swift-proxy-674547b56f-gvjxm" Jan 22 09:29:16 crc kubenswrapper[4892]: I0122 09:29:16.876945 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/accdf866-14d0-4308-a8d7-c598fde46122-combined-ca-bundle\") pod \"swift-proxy-674547b56f-gvjxm\" (UID: \"accdf866-14d0-4308-a8d7-c598fde46122\") " pod="openstack/swift-proxy-674547b56f-gvjxm" Jan 22 09:29:16 crc kubenswrapper[4892]: I0122 09:29:16.876993 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/accdf866-14d0-4308-a8d7-c598fde46122-internal-tls-certs\") pod \"swift-proxy-674547b56f-gvjxm\" (UID: \"accdf866-14d0-4308-a8d7-c598fde46122\") " pod="openstack/swift-proxy-674547b56f-gvjxm" Jan 22 09:29:16 crc kubenswrapper[4892]: I0122 09:29:16.877049 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/accdf866-14d0-4308-a8d7-c598fde46122-log-httpd\") pod \"swift-proxy-674547b56f-gvjxm\" (UID: \"accdf866-14d0-4308-a8d7-c598fde46122\") " pod="openstack/swift-proxy-674547b56f-gvjxm" Jan 22 09:29:16 crc kubenswrapper[4892]: I0122 09:29:16.877103 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/accdf866-14d0-4308-a8d7-c598fde46122-run-httpd\") pod \"swift-proxy-674547b56f-gvjxm\" (UID: \"accdf866-14d0-4308-a8d7-c598fde46122\") " pod="openstack/swift-proxy-674547b56f-gvjxm" Jan 22 09:29:16 crc 
kubenswrapper[4892]: I0122 09:29:16.877635 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/accdf866-14d0-4308-a8d7-c598fde46122-run-httpd\") pod \"swift-proxy-674547b56f-gvjxm\" (UID: \"accdf866-14d0-4308-a8d7-c598fde46122\") " pod="openstack/swift-proxy-674547b56f-gvjxm" Jan 22 09:29:16 crc kubenswrapper[4892]: I0122 09:29:16.878121 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/accdf866-14d0-4308-a8d7-c598fde46122-log-httpd\") pod \"swift-proxy-674547b56f-gvjxm\" (UID: \"accdf866-14d0-4308-a8d7-c598fde46122\") " pod="openstack/swift-proxy-674547b56f-gvjxm" Jan 22 09:29:16 crc kubenswrapper[4892]: I0122 09:29:16.881976 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/accdf866-14d0-4308-a8d7-c598fde46122-combined-ca-bundle\") pod \"swift-proxy-674547b56f-gvjxm\" (UID: \"accdf866-14d0-4308-a8d7-c598fde46122\") " pod="openstack/swift-proxy-674547b56f-gvjxm" Jan 22 09:29:16 crc kubenswrapper[4892]: I0122 09:29:16.882952 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/accdf866-14d0-4308-a8d7-c598fde46122-etc-swift\") pod \"swift-proxy-674547b56f-gvjxm\" (UID: \"accdf866-14d0-4308-a8d7-c598fde46122\") " pod="openstack/swift-proxy-674547b56f-gvjxm" Jan 22 09:29:16 crc kubenswrapper[4892]: I0122 09:29:16.883583 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/accdf866-14d0-4308-a8d7-c598fde46122-internal-tls-certs\") pod \"swift-proxy-674547b56f-gvjxm\" (UID: \"accdf866-14d0-4308-a8d7-c598fde46122\") " pod="openstack/swift-proxy-674547b56f-gvjxm" Jan 22 09:29:16 crc kubenswrapper[4892]: I0122 09:29:16.894117 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/accdf866-14d0-4308-a8d7-c598fde46122-config-data\") pod \"swift-proxy-674547b56f-gvjxm\" (UID: \"accdf866-14d0-4308-a8d7-c598fde46122\") " pod="openstack/swift-proxy-674547b56f-gvjxm" Jan 22 09:29:16 crc kubenswrapper[4892]: I0122 09:29:16.895689 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/accdf866-14d0-4308-a8d7-c598fde46122-public-tls-certs\") pod \"swift-proxy-674547b56f-gvjxm\" (UID: \"accdf866-14d0-4308-a8d7-c598fde46122\") " pod="openstack/swift-proxy-674547b56f-gvjxm" Jan 22 09:29:16 crc kubenswrapper[4892]: I0122 09:29:16.920600 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h5b9\" (UniqueName: \"kubernetes.io/projected/accdf866-14d0-4308-a8d7-c598fde46122-kube-api-access-7h5b9\") pod \"swift-proxy-674547b56f-gvjxm\" (UID: \"accdf866-14d0-4308-a8d7-c598fde46122\") " pod="openstack/swift-proxy-674547b56f-gvjxm" Jan 22 09:29:17 crc kubenswrapper[4892]: I0122 09:29:17.084371 4892 util.go:30] "No sandbox for pod can be found. 
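Every volume in the swift-proxy pod above walks the same path: VerifyControllerAttachedVolume, then "MountVolume started", then "MountVolume.SetUp succeeded". The driver behind those entries is a reconciler that repeatedly diffs a desired state of world (volumes the scheduled pods need) against an actual state of world (volumes currently mounted). A toy version of that diff, with shortened UIDs and illustrative volume names:

```go
package main

import "fmt"

// reconcile computes the mount/unmount work implied by the two "worlds",
// the same diff the kubelet's volume reconciler logs as
// "MountVolume started" / "UnmountVolume started".
func reconcile(desired, actual map[string]bool) (mount, unmount []string) {
	for v := range desired {
		if !actual[v] {
			mount = append(mount, v)
		}
	}
	for v := range actual {
		if !desired[v] {
			unmount = append(unmount, v)
		}
	}
	return mount, unmount
}

func main() {
	// Desired: the new swift-proxy pod's volumes.
	desired := map[string]bool{
		"accdf866/config-data": true,
		"accdf866/etc-swift":   true,
		"accdf866/run-httpd":   true,
	}
	// Actual: a deleted pod's volumes are still mounted.
	actual := map[string]bool{
		"f3109c9e/config":       true,
		"f3109c9e/httpd-config": true,
	}
	mount, unmount := reconcile(desired, actual)
	fmt.Println("mount:", mount)
	fmt.Println("unmount:", unmount)
}
```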
Need to start a new one" pod="openstack/swift-proxy-674547b56f-gvjxm" Jan 22 09:29:17 crc kubenswrapper[4892]: I0122 09:29:17.430477 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3109c9e-e308-4022-8b5e-4b8b61319c24" path="/var/lib/kubelet/pods/f3109c9e-e308-4022-8b5e-4b8b61319c24/volumes" Jan 22 09:29:18 crc kubenswrapper[4892]: I0122 09:29:18.249645 4892 generic.go:334] "Generic (PLEG): container finished" podID="64acec33-a0ea-4b7f-b7dd-ae704b047a95" containerID="0f416c07896322b43443694c5c94a4a32ecdc0e4d223f3ec89e702468fd1fe8f" exitCode=0 Jan 22 09:29:18 crc kubenswrapper[4892]: I0122 09:29:18.249737 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64acec33-a0ea-4b7f-b7dd-ae704b047a95","Type":"ContainerDied","Data":"0f416c07896322b43443694c5c94a4a32ecdc0e4d223f3ec89e702468fd1fe8f"} Jan 22 09:29:21 crc kubenswrapper[4892]: I0122 09:29:21.480489 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:29:21 crc kubenswrapper[4892]: I0122 09:29:21.580320 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64acec33-a0ea-4b7f-b7dd-ae704b047a95-config-data\") pod \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\" (UID: \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\") " Jan 22 09:29:21 crc kubenswrapper[4892]: I0122 09:29:21.580357 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64acec33-a0ea-4b7f-b7dd-ae704b047a95-combined-ca-bundle\") pod \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\" (UID: \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\") " Jan 22 09:29:21 crc kubenswrapper[4892]: I0122 09:29:21.580395 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64acec33-a0ea-4b7f-b7dd-ae704b047a95-run-httpd\") pod \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\" (UID: \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\") " Jan 22 09:29:21 crc kubenswrapper[4892]: I0122 09:29:21.580453 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64acec33-a0ea-4b7f-b7dd-ae704b047a95-scripts\") pod \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\" (UID: \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\") " Jan 22 09:29:21 crc kubenswrapper[4892]: I0122 09:29:21.580477 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/64acec33-a0ea-4b7f-b7dd-ae704b047a95-sg-core-conf-yaml\") pod \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\" (UID: \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\") " Jan 22 09:29:21 crc kubenswrapper[4892]: I0122 09:29:21.580563 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7h6d\" (UniqueName: \"kubernetes.io/projected/64acec33-a0ea-4b7f-b7dd-ae704b047a95-kube-api-access-m7h6d\") pod \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\" (UID: \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\") " Jan 22 09:29:21 crc kubenswrapper[4892]: I0122 09:29:21.580660 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64acec33-a0ea-4b7f-b7dd-ae704b047a95-log-httpd\") pod \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\" (UID: \"64acec33-a0ea-4b7f-b7dd-ae704b047a95\") " Jan 22 09:29:21 crc kubenswrapper[4892]: I0122 
09:29:21.581207 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64acec33-a0ea-4b7f-b7dd-ae704b047a95-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "64acec33-a0ea-4b7f-b7dd-ae704b047a95" (UID: "64acec33-a0ea-4b7f-b7dd-ae704b047a95"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:29:21 crc kubenswrapper[4892]: I0122 09:29:21.581766 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64acec33-a0ea-4b7f-b7dd-ae704b047a95-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "64acec33-a0ea-4b7f-b7dd-ae704b047a95" (UID: "64acec33-a0ea-4b7f-b7dd-ae704b047a95"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:29:21 crc kubenswrapper[4892]: I0122 09:29:21.590906 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64acec33-a0ea-4b7f-b7dd-ae704b047a95-scripts" (OuterVolumeSpecName: "scripts") pod "64acec33-a0ea-4b7f-b7dd-ae704b047a95" (UID: "64acec33-a0ea-4b7f-b7dd-ae704b047a95"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:21 crc kubenswrapper[4892]: I0122 09:29:21.592476 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64acec33-a0ea-4b7f-b7dd-ae704b047a95-kube-api-access-m7h6d" (OuterVolumeSpecName: "kube-api-access-m7h6d") pod "64acec33-a0ea-4b7f-b7dd-ae704b047a95" (UID: "64acec33-a0ea-4b7f-b7dd-ae704b047a95"). InnerVolumeSpecName "kube-api-access-m7h6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:29:21 crc kubenswrapper[4892]: I0122 09:29:21.608722 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64acec33-a0ea-4b7f-b7dd-ae704b047a95-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "64acec33-a0ea-4b7f-b7dd-ae704b047a95" (UID: "64acec33-a0ea-4b7f-b7dd-ae704b047a95"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:21 crc kubenswrapper[4892]: I0122 09:29:21.649699 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64acec33-a0ea-4b7f-b7dd-ae704b047a95-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64acec33-a0ea-4b7f-b7dd-ae704b047a95" (UID: "64acec33-a0ea-4b7f-b7dd-ae704b047a95"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:21 crc kubenswrapper[4892]: I0122 09:29:21.683414 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7h6d\" (UniqueName: \"kubernetes.io/projected/64acec33-a0ea-4b7f-b7dd-ae704b047a95-kube-api-access-m7h6d\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:21 crc kubenswrapper[4892]: I0122 09:29:21.683445 4892 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64acec33-a0ea-4b7f-b7dd-ae704b047a95-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:21 crc kubenswrapper[4892]: I0122 09:29:21.683454 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64acec33-a0ea-4b7f-b7dd-ae704b047a95-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:21 crc kubenswrapper[4892]: I0122 09:29:21.683462 4892 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64acec33-a0ea-4b7f-b7dd-ae704b047a95-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:21 crc kubenswrapper[4892]: I0122 09:29:21.683470 4892 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64acec33-a0ea-4b7f-b7dd-ae704b047a95-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:21 crc kubenswrapper[4892]: I0122 09:29:21.683479 4892 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/64acec33-a0ea-4b7f-b7dd-ae704b047a95-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:21 crc kubenswrapper[4892]: I0122 09:29:21.685422 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64acec33-a0ea-4b7f-b7dd-ae704b047a95-config-data" (OuterVolumeSpecName: "config-data") pod "64acec33-a0ea-4b7f-b7dd-ae704b047a95" (UID: "64acec33-a0ea-4b7f-b7dd-ae704b047a95"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:21 crc kubenswrapper[4892]: I0122 09:29:21.710391 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-674547b56f-gvjxm"] Jan 22 09:29:21 crc kubenswrapper[4892]: W0122 09:29:21.710963 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaccdf866_14d0_4308_a8d7_c598fde46122.slice/crio-ac90ff1d912dde937e0db717403fe72506648090e3c367ac8b6856c923fd509c WatchSource:0}: Error finding container ac90ff1d912dde937e0db717403fe72506648090e3c367ac8b6856c923fd509c: Status 404 returned error can't find the container with id ac90ff1d912dde937e0db717403fe72506648090e3c367ac8b6856c923fd509c Jan 22 09:29:21 crc kubenswrapper[4892]: I0122 09:29:21.784918 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64acec33-a0ea-4b7f-b7dd-ae704b047a95-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.287814 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64acec33-a0ea-4b7f-b7dd-ae704b047a95","Type":"ContainerDied","Data":"b0bc76740f92ec6b530e02a616b989e6d54a84d037af5f704cb931e4d9230009"} Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.288139 4892 scope.go:117] "RemoveContainer" containerID="fee122524d788c74233f0530d6672b2c001393b745996bd9552612a9de378a92" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.288025 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.292906 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-674547b56f-gvjxm" event={"ID":"accdf866-14d0-4308-a8d7-c598fde46122","Type":"ContainerStarted","Data":"c38f611b2aa4c67207252a760940ebefc59b6c9ed2ab8426fa1d239597aaff53"} Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.292949 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-674547b56f-gvjxm" event={"ID":"accdf866-14d0-4308-a8d7-c598fde46122","Type":"ContainerStarted","Data":"bb4bd3119fb35c0d51e5d1097b02da844f2dd458d19328f7ca30ae002da4d5ac"} Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.292960 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-674547b56f-gvjxm" event={"ID":"accdf866-14d0-4308-a8d7-c598fde46122","Type":"ContainerStarted","Data":"ac90ff1d912dde937e0db717403fe72506648090e3c367ac8b6856c923fd509c"} Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.293704 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-674547b56f-gvjxm" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.294433 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-674547b56f-gvjxm" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.295733 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d9a4d1e6-4981-477c-b2cf-8a132de2c1d9","Type":"ContainerStarted","Data":"f9613a62420d91c1b75ddd517147516cf6ee98c741fb2c977b240d14203f3079"} Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.322427 4892 scope.go:117] "RemoveContainer" containerID="b3bec6e9894c7cd6ad1ce0be1603747f327c8c4e715957d82111fcafaa304e32" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.327719 4892 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.327986 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8ac9e125-77fe-4415-b124-bdd6816b313d" containerName="glance-log" containerID="cri-o://59c06b000a9cbda5fa1e6e5a8841b468d6cad2333bcec4686276681679acd1c5" gracePeriod=30 Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.328156 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8ac9e125-77fe-4415-b124-bdd6816b313d" containerName="glance-httpd" containerID="cri-o://1386fdc923c72735bf28abd229c4dbece689581e11d214b9dd509ad6a52e576f" gracePeriod=30 Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.339297 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-674547b56f-gvjxm" podStartSLOduration=6.339258806 podStartE2EDuration="6.339258806s" podCreationTimestamp="2026-01-22 09:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:29:22.325843201 +0000 UTC m=+1132.169922264" watchObservedRunningTime="2026-01-22 09:29:22.339258806 +0000 UTC m=+1132.183337869" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.342203 4892 scope.go:117] "RemoveContainer" containerID="0f416c07896322b43443694c5c94a4a32ecdc0e4d223f3ec89e702468fd1fe8f" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.355885 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.371056 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.384761 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:29:22 crc kubenswrapper[4892]: E0122 09:29:22.385216 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64acec33-a0ea-4b7f-b7dd-ae704b047a95" containerName="ceilometer-notification-agent" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.385238 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="64acec33-a0ea-4b7f-b7dd-ae704b047a95" containerName="ceilometer-notification-agent" Jan 22 09:29:22 crc kubenswrapper[4892]: E0122 09:29:22.385264 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64acec33-a0ea-4b7f-b7dd-ae704b047a95" containerName="proxy-httpd" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.385275 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="64acec33-a0ea-4b7f-b7dd-ae704b047a95" containerName="proxy-httpd" Jan 22 09:29:22 crc kubenswrapper[4892]: E0122 09:29:22.385300 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64acec33-a0ea-4b7f-b7dd-ae704b047a95" containerName="ceilometer-central-agent" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.385308 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="64acec33-a0ea-4b7f-b7dd-ae704b047a95" containerName="ceilometer-central-agent" Jan 22 09:29:22 crc kubenswrapper[4892]: E0122 09:29:22.385339 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64acec33-a0ea-4b7f-b7dd-ae704b047a95" containerName="sg-core" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.385348 4892 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="64acec33-a0ea-4b7f-b7dd-ae704b047a95" containerName="sg-core" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.385550 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="64acec33-a0ea-4b7f-b7dd-ae704b047a95" containerName="proxy-httpd" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.385568 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="64acec33-a0ea-4b7f-b7dd-ae704b047a95" containerName="ceilometer-notification-agent" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.385580 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="64acec33-a0ea-4b7f-b7dd-ae704b047a95" containerName="ceilometer-central-agent" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.385594 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="64acec33-a0ea-4b7f-b7dd-ae704b047a95" containerName="sg-core" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.386562 4892 scope.go:117] "RemoveContainer" containerID="779ed22187101d315e9196eabd0c32bf707f10f7a60f267f799195fd44b36e3c" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.387611 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.390448 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.390463 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.588673784 podStartE2EDuration="12.390452109s" podCreationTimestamp="2026-01-22 09:29:10 +0000 UTC" firstStartedPulling="2026-01-22 09:29:11.40736748 +0000 UTC m=+1121.251446543" lastFinishedPulling="2026-01-22 09:29:21.209145805 +0000 UTC m=+1131.053224868" observedRunningTime="2026-01-22 09:29:22.365251277 +0000 UTC m=+1132.209330340" watchObservedRunningTime="2026-01-22 09:29:22.390452109 +0000 UTC m=+1132.234531172" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.390545 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.406150 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.497521 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wttjg\" (UniqueName: \"kubernetes.io/projected/e806d349-4c1e-4dc9-836a-ece0da878110-kube-api-access-wttjg\") pod \"ceilometer-0\" (UID: \"e806d349-4c1e-4dc9-836a-ece0da878110\") " pod="openstack/ceilometer-0" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.497610 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e806d349-4c1e-4dc9-836a-ece0da878110-config-data\") pod \"ceilometer-0\" (UID: \"e806d349-4c1e-4dc9-836a-ece0da878110\") " pod="openstack/ceilometer-0" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.497649 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e806d349-4c1e-4dc9-836a-ece0da878110-run-httpd\") pod \"ceilometer-0\" (UID: \"e806d349-4c1e-4dc9-836a-ece0da878110\") " pod="openstack/ceilometer-0" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.497701 4892 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e806d349-4c1e-4dc9-836a-ece0da878110-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e806d349-4c1e-4dc9-836a-ece0da878110\") " pod="openstack/ceilometer-0" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.497898 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e806d349-4c1e-4dc9-836a-ece0da878110-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e806d349-4c1e-4dc9-836a-ece0da878110\") " pod="openstack/ceilometer-0" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.497952 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e806d349-4c1e-4dc9-836a-ece0da878110-scripts\") pod \"ceilometer-0\" (UID: \"e806d349-4c1e-4dc9-836a-ece0da878110\") " pod="openstack/ceilometer-0" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.498023 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e806d349-4c1e-4dc9-836a-ece0da878110-log-httpd\") pod \"ceilometer-0\" (UID: \"e806d349-4c1e-4dc9-836a-ece0da878110\") " pod="openstack/ceilometer-0" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.599371 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e806d349-4c1e-4dc9-836a-ece0da878110-config-data\") pod \"ceilometer-0\" (UID: \"e806d349-4c1e-4dc9-836a-ece0da878110\") " pod="openstack/ceilometer-0" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.599696 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e806d349-4c1e-4dc9-836a-ece0da878110-run-httpd\") pod \"ceilometer-0\" (UID: \"e806d349-4c1e-4dc9-836a-ece0da878110\") " pod="openstack/ceilometer-0" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.599741 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e806d349-4c1e-4dc9-836a-ece0da878110-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e806d349-4c1e-4dc9-836a-ece0da878110\") " pod="openstack/ceilometer-0" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.599802 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e806d349-4c1e-4dc9-836a-ece0da878110-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e806d349-4c1e-4dc9-836a-ece0da878110\") " pod="openstack/ceilometer-0" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.599821 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e806d349-4c1e-4dc9-836a-ece0da878110-scripts\") pod \"ceilometer-0\" (UID: \"e806d349-4c1e-4dc9-836a-ece0da878110\") " pod="openstack/ceilometer-0" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.599850 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e806d349-4c1e-4dc9-836a-ece0da878110-log-httpd\") pod \"ceilometer-0\" (UID: \"e806d349-4c1e-4dc9-836a-ece0da878110\") " pod="openstack/ceilometer-0" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 
09:29:22.599886 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wttjg\" (UniqueName: \"kubernetes.io/projected/e806d349-4c1e-4dc9-836a-ece0da878110-kube-api-access-wttjg\") pod \"ceilometer-0\" (UID: \"e806d349-4c1e-4dc9-836a-ece0da878110\") " pod="openstack/ceilometer-0" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.600321 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e806d349-4c1e-4dc9-836a-ece0da878110-run-httpd\") pod \"ceilometer-0\" (UID: \"e806d349-4c1e-4dc9-836a-ece0da878110\") " pod="openstack/ceilometer-0" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.600435 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e806d349-4c1e-4dc9-836a-ece0da878110-log-httpd\") pod \"ceilometer-0\" (UID: \"e806d349-4c1e-4dc9-836a-ece0da878110\") " pod="openstack/ceilometer-0" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.606359 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e806d349-4c1e-4dc9-836a-ece0da878110-config-data\") pod \"ceilometer-0\" (UID: \"e806d349-4c1e-4dc9-836a-ece0da878110\") " pod="openstack/ceilometer-0" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.606846 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e806d349-4c1e-4dc9-836a-ece0da878110-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e806d349-4c1e-4dc9-836a-ece0da878110\") " pod="openstack/ceilometer-0" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.618083 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e806d349-4c1e-4dc9-836a-ece0da878110-scripts\") pod \"ceilometer-0\" (UID: \"e806d349-4c1e-4dc9-836a-ece0da878110\") " pod="openstack/ceilometer-0" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.618561 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e806d349-4c1e-4dc9-836a-ece0da878110-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e806d349-4c1e-4dc9-836a-ece0da878110\") " pod="openstack/ceilometer-0" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.621235 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wttjg\" (UniqueName: \"kubernetes.io/projected/e806d349-4c1e-4dc9-836a-ece0da878110-kube-api-access-wttjg\") pod \"ceilometer-0\" (UID: \"e806d349-4c1e-4dc9-836a-ece0da878110\") " pod="openstack/ceilometer-0" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.704774 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:29:22 crc kubenswrapper[4892]: I0122 09:29:22.797946 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:29:23 crc kubenswrapper[4892]: I0122 09:29:23.089915 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 09:29:23 crc kubenswrapper[4892]: I0122 09:29:23.090460 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ce40113d-7ce5-4cff-b5e4-6d84102a6af6" containerName="glance-log" containerID="cri-o://c0f099de934d3e1136d9e241dc36c5aca540c063e025dfd9576ea837ebfdedea" gracePeriod=30 Jan 22 09:29:23 crc kubenswrapper[4892]: I0122 09:29:23.090617 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ce40113d-7ce5-4cff-b5e4-6d84102a6af6" containerName="glance-httpd" containerID="cri-o://a3ac5d889501d793e854a738c29613cf750642c47f17f3b68816dd1170ca9657" gracePeriod=30 Jan 22 09:29:23 crc kubenswrapper[4892]: I0122 09:29:23.178517 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:29:23 crc kubenswrapper[4892]: W0122 09:29:23.189312 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode806d349_4c1e_4dc9_836a_ece0da878110.slice/crio-b1186d8935a699cc500289b8dee01237c9bf9f04c0e9b339fc6331b1ad63e653 WatchSource:0}: Error finding container b1186d8935a699cc500289b8dee01237c9bf9f04c0e9b339fc6331b1ad63e653: Status 404 returned error can't find the container with id b1186d8935a699cc500289b8dee01237c9bf9f04c0e9b339fc6331b1ad63e653 Jan 22 09:29:23 crc kubenswrapper[4892]: I0122 09:29:23.306259 4892 generic.go:334] "Generic (PLEG): container finished" podID="ce40113d-7ce5-4cff-b5e4-6d84102a6af6" containerID="c0f099de934d3e1136d9e241dc36c5aca540c063e025dfd9576ea837ebfdedea" exitCode=143 Jan 22 09:29:23 crc kubenswrapper[4892]: I0122 09:29:23.306318 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ce40113d-7ce5-4cff-b5e4-6d84102a6af6","Type":"ContainerDied","Data":"c0f099de934d3e1136d9e241dc36c5aca540c063e025dfd9576ea837ebfdedea"} Jan 22 09:29:23 crc kubenswrapper[4892]: I0122 09:29:23.308932 4892 generic.go:334] "Generic (PLEG): container finished" podID="8ac9e125-77fe-4415-b124-bdd6816b313d" containerID="59c06b000a9cbda5fa1e6e5a8841b468d6cad2333bcec4686276681679acd1c5" exitCode=143 Jan 22 09:29:23 crc kubenswrapper[4892]: I0122 09:29:23.309007 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8ac9e125-77fe-4415-b124-bdd6816b313d","Type":"ContainerDied","Data":"59c06b000a9cbda5fa1e6e5a8841b468d6cad2333bcec4686276681679acd1c5"} Jan 22 09:29:23 crc kubenswrapper[4892]: I0122 09:29:23.310243 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e806d349-4c1e-4dc9-836a-ece0da878110","Type":"ContainerStarted","Data":"b1186d8935a699cc500289b8dee01237c9bf9f04c0e9b339fc6331b1ad63e653"} Jan 22 09:29:23 crc kubenswrapper[4892]: I0122 09:29:23.427877 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64acec33-a0ea-4b7f-b7dd-ae704b047a95" path="/var/lib/kubelet/pods/64acec33-a0ea-4b7f-b7dd-ae704b047a95/volumes" Jan 22 09:29:23 crc kubenswrapper[4892]: I0122 09:29:23.751610 4892 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-pxmk9"] Jan 22 09:29:23 crc kubenswrapper[4892]: I0122 09:29:23.752813 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-pxmk9" Jan 22 09:29:23 crc kubenswrapper[4892]: I0122 09:29:23.763478 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-pxmk9"] Jan 22 09:29:23 crc kubenswrapper[4892]: I0122 09:29:23.836997 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-w9xkt"] Jan 22 09:29:23 crc kubenswrapper[4892]: I0122 09:29:23.838756 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-w9xkt" Jan 22 09:29:23 crc kubenswrapper[4892]: I0122 09:29:23.842070 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35dbe302-2a50-49e2-874a-b3bfe80bb483-operator-scripts\") pod \"nova-api-db-create-pxmk9\" (UID: \"35dbe302-2a50-49e2-874a-b3bfe80bb483\") " pod="openstack/nova-api-db-create-pxmk9" Jan 22 09:29:23 crc kubenswrapper[4892]: I0122 09:29:23.842200 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6m9p\" (UniqueName: \"kubernetes.io/projected/35dbe302-2a50-49e2-874a-b3bfe80bb483-kube-api-access-c6m9p\") pod \"nova-api-db-create-pxmk9\" (UID: \"35dbe302-2a50-49e2-874a-b3bfe80bb483\") " pod="openstack/nova-api-db-create-pxmk9" Jan 22 09:29:23 crc kubenswrapper[4892]: I0122 09:29:23.854432 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-3d72-account-create-update-phg7j"] Jan 22 09:29:23 crc kubenswrapper[4892]: I0122 09:29:23.855872 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-3d72-account-create-update-phg7j" Jan 22 09:29:23 crc kubenswrapper[4892]: I0122 09:29:23.858079 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 22 09:29:23 crc kubenswrapper[4892]: I0122 09:29:23.873338 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-w9xkt"] Jan 22 09:29:23 crc kubenswrapper[4892]: I0122 09:29:23.889702 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3d72-account-create-update-phg7j"] Jan 22 09:29:23 crc kubenswrapper[4892]: I0122 09:29:23.943989 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwpzn\" (UniqueName: \"kubernetes.io/projected/f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb-kube-api-access-fwpzn\") pod \"nova-api-3d72-account-create-update-phg7j\" (UID: \"f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb\") " pod="openstack/nova-api-3d72-account-create-update-phg7j" Jan 22 09:29:23 crc kubenswrapper[4892]: I0122 09:29:23.944327 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35dbe302-2a50-49e2-874a-b3bfe80bb483-operator-scripts\") pod \"nova-api-db-create-pxmk9\" (UID: \"35dbe302-2a50-49e2-874a-b3bfe80bb483\") " pod="openstack/nova-api-db-create-pxmk9" Jan 22 09:29:23 crc kubenswrapper[4892]: I0122 09:29:23.944376 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58bdf833-f1d4-4c50-9710-0453e093b082-operator-scripts\") pod \"nova-cell0-db-create-w9xkt\" (UID: \"58bdf833-f1d4-4c50-9710-0453e093b082\") " pod="openstack/nova-cell0-db-create-w9xkt" Jan 22 09:29:23 crc kubenswrapper[4892]: I0122 09:29:23.944429 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb-operator-scripts\") pod \"nova-api-3d72-account-create-update-phg7j\" (UID: \"f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb\") " pod="openstack/nova-api-3d72-account-create-update-phg7j" Jan 22 09:29:23 crc kubenswrapper[4892]: I0122 09:29:23.944451 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6m9p\" (UniqueName: \"kubernetes.io/projected/35dbe302-2a50-49e2-874a-b3bfe80bb483-kube-api-access-c6m9p\") pod \"nova-api-db-create-pxmk9\" (UID: \"35dbe302-2a50-49e2-874a-b3bfe80bb483\") " pod="openstack/nova-api-db-create-pxmk9" Jan 22 09:29:23 crc kubenswrapper[4892]: I0122 09:29:23.944480 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp4qf\" (UniqueName: \"kubernetes.io/projected/58bdf833-f1d4-4c50-9710-0453e093b082-kube-api-access-wp4qf\") pod \"nova-cell0-db-create-w9xkt\" (UID: \"58bdf833-f1d4-4c50-9710-0453e093b082\") " pod="openstack/nova-cell0-db-create-w9xkt" Jan 22 09:29:23 crc kubenswrapper[4892]: I0122 09:29:23.946557 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35dbe302-2a50-49e2-874a-b3bfe80bb483-operator-scripts\") pod \"nova-api-db-create-pxmk9\" (UID: \"35dbe302-2a50-49e2-874a-b3bfe80bb483\") " pod="openstack/nova-api-db-create-pxmk9" Jan 22 09:29:23 crc kubenswrapper[4892]: I0122 09:29:23.948351 4892 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/nova-cell1-db-create-ltg2c"] Jan 22 09:29:23 crc kubenswrapper[4892]: I0122 09:29:23.949484 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-ltg2c" Jan 22 09:29:23 crc kubenswrapper[4892]: I0122 09:29:23.959909 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-ltg2c"] Jan 22 09:29:23 crc kubenswrapper[4892]: I0122 09:29:23.974135 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6m9p\" (UniqueName: \"kubernetes.io/projected/35dbe302-2a50-49e2-874a-b3bfe80bb483-kube-api-access-c6m9p\") pod \"nova-api-db-create-pxmk9\" (UID: \"35dbe302-2a50-49e2-874a-b3bfe80bb483\") " pod="openstack/nova-api-db-create-pxmk9" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.046454 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/294650da-35b8-402b-86ca-722359f803f8-operator-scripts\") pod \"nova-cell1-db-create-ltg2c\" (UID: \"294650da-35b8-402b-86ca-722359f803f8\") " pod="openstack/nova-cell1-db-create-ltg2c" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.046522 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58bdf833-f1d4-4c50-9710-0453e093b082-operator-scripts\") pod \"nova-cell0-db-create-w9xkt\" (UID: \"58bdf833-f1d4-4c50-9710-0453e093b082\") " pod="openstack/nova-cell0-db-create-w9xkt" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.046676 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb-operator-scripts\") pod \"nova-api-3d72-account-create-update-phg7j\" (UID: \"f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb\") " pod="openstack/nova-api-3d72-account-create-update-phg7j" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.046773 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp4qf\" (UniqueName: \"kubernetes.io/projected/58bdf833-f1d4-4c50-9710-0453e093b082-kube-api-access-wp4qf\") pod \"nova-cell0-db-create-w9xkt\" (UID: \"58bdf833-f1d4-4c50-9710-0453e093b082\") " pod="openstack/nova-cell0-db-create-w9xkt" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.046908 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx98f\" (UniqueName: \"kubernetes.io/projected/294650da-35b8-402b-86ca-722359f803f8-kube-api-access-bx98f\") pod \"nova-cell1-db-create-ltg2c\" (UID: \"294650da-35b8-402b-86ca-722359f803f8\") " pod="openstack/nova-cell1-db-create-ltg2c" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.047156 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwpzn\" (UniqueName: \"kubernetes.io/projected/f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb-kube-api-access-fwpzn\") pod \"nova-api-3d72-account-create-update-phg7j\" (UID: \"f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb\") " pod="openstack/nova-api-3d72-account-create-update-phg7j" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.047428 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58bdf833-f1d4-4c50-9710-0453e093b082-operator-scripts\") pod \"nova-cell0-db-create-w9xkt\" (UID: 
\"58bdf833-f1d4-4c50-9710-0453e093b082\") " pod="openstack/nova-cell0-db-create-w9xkt" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.047588 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb-operator-scripts\") pod \"nova-api-3d72-account-create-update-phg7j\" (UID: \"f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb\") " pod="openstack/nova-api-3d72-account-create-update-phg7j" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.056669 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-f45c-account-create-update-j2b4k"] Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.057796 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f45c-account-create-update-j2b4k" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.060309 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.069377 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwpzn\" (UniqueName: \"kubernetes.io/projected/f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb-kube-api-access-fwpzn\") pod \"nova-api-3d72-account-create-update-phg7j\" (UID: \"f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb\") " pod="openstack/nova-api-3d72-account-create-update-phg7j" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.071172 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp4qf\" (UniqueName: \"kubernetes.io/projected/58bdf833-f1d4-4c50-9710-0453e093b082-kube-api-access-wp4qf\") pod \"nova-cell0-db-create-w9xkt\" (UID: \"58bdf833-f1d4-4c50-9710-0453e093b082\") " pod="openstack/nova-cell0-db-create-w9xkt" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.071495 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-f45c-account-create-update-j2b4k"] Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.083562 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-pxmk9" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.148313 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/962154cf-811c-41cc-a43a-e63af5139dc8-operator-scripts\") pod \"nova-cell0-f45c-account-create-update-j2b4k\" (UID: \"962154cf-811c-41cc-a43a-e63af5139dc8\") " pod="openstack/nova-cell0-f45c-account-create-update-j2b4k" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.148362 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hccgm\" (UniqueName: \"kubernetes.io/projected/962154cf-811c-41cc-a43a-e63af5139dc8-kube-api-access-hccgm\") pod \"nova-cell0-f45c-account-create-update-j2b4k\" (UID: \"962154cf-811c-41cc-a43a-e63af5139dc8\") " pod="openstack/nova-cell0-f45c-account-create-update-j2b4k" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.148407 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx98f\" (UniqueName: \"kubernetes.io/projected/294650da-35b8-402b-86ca-722359f803f8-kube-api-access-bx98f\") pod \"nova-cell1-db-create-ltg2c\" (UID: \"294650da-35b8-402b-86ca-722359f803f8\") " pod="openstack/nova-cell1-db-create-ltg2c" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.148501 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/294650da-35b8-402b-86ca-722359f803f8-operator-scripts\") pod \"nova-cell1-db-create-ltg2c\" (UID: \"294650da-35b8-402b-86ca-722359f803f8\") " pod="openstack/nova-cell1-db-create-ltg2c" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.149561 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/294650da-35b8-402b-86ca-722359f803f8-operator-scripts\") pod \"nova-cell1-db-create-ltg2c\" (UID: \"294650da-35b8-402b-86ca-722359f803f8\") " pod="openstack/nova-cell1-db-create-ltg2c" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.161324 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-w9xkt" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.166224 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-6c9a-account-create-update-8ldll"] Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.169563 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6c9a-account-create-update-8ldll" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.174373 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.178220 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-3d72-account-create-update-phg7j" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.178951 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx98f\" (UniqueName: \"kubernetes.io/projected/294650da-35b8-402b-86ca-722359f803f8-kube-api-access-bx98f\") pod \"nova-cell1-db-create-ltg2c\" (UID: \"294650da-35b8-402b-86ca-722359f803f8\") " pod="openstack/nova-cell1-db-create-ltg2c" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.186944 4892 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-79777d5484-zk25q" podUID="c837fca4-ae2c-43fd-850c-f2aca8331d27" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.187156 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6c9a-account-create-update-8ldll"] Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.187554 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-79777d5484-zk25q" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.250353 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b08dee6e-948e-480a-8642-ee350e0a05f1-operator-scripts\") pod \"nova-cell1-6c9a-account-create-update-8ldll\" (UID: \"b08dee6e-948e-480a-8642-ee350e0a05f1\") " pod="openstack/nova-cell1-6c9a-account-create-update-8ldll" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.250712 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hztb\" (UniqueName: \"kubernetes.io/projected/b08dee6e-948e-480a-8642-ee350e0a05f1-kube-api-access-7hztb\") pod \"nova-cell1-6c9a-account-create-update-8ldll\" (UID: \"b08dee6e-948e-480a-8642-ee350e0a05f1\") " pod="openstack/nova-cell1-6c9a-account-create-update-8ldll" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.250752 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/962154cf-811c-41cc-a43a-e63af5139dc8-operator-scripts\") pod \"nova-cell0-f45c-account-create-update-j2b4k\" (UID: \"962154cf-811c-41cc-a43a-e63af5139dc8\") " pod="openstack/nova-cell0-f45c-account-create-update-j2b4k" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.250774 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hccgm\" (UniqueName: \"kubernetes.io/projected/962154cf-811c-41cc-a43a-e63af5139dc8-kube-api-access-hccgm\") pod \"nova-cell0-f45c-account-create-update-j2b4k\" (UID: \"962154cf-811c-41cc-a43a-e63af5139dc8\") " pod="openstack/nova-cell0-f45c-account-create-update-j2b4k" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.253247 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/962154cf-811c-41cc-a43a-e63af5139dc8-operator-scripts\") pod \"nova-cell0-f45c-account-create-update-j2b4k\" (UID: \"962154cf-811c-41cc-a43a-e63af5139dc8\") " pod="openstack/nova-cell0-f45c-account-create-update-j2b4k" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.269250 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hccgm\" 
(UniqueName: \"kubernetes.io/projected/962154cf-811c-41cc-a43a-e63af5139dc8-kube-api-access-hccgm\") pod \"nova-cell0-f45c-account-create-update-j2b4k\" (UID: \"962154cf-811c-41cc-a43a-e63af5139dc8\") " pod="openstack/nova-cell0-f45c-account-create-update-j2b4k" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.324243 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-ltg2c" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.334725 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f45c-account-create-update-j2b4k" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.352424 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b08dee6e-948e-480a-8642-ee350e0a05f1-operator-scripts\") pod \"nova-cell1-6c9a-account-create-update-8ldll\" (UID: \"b08dee6e-948e-480a-8642-ee350e0a05f1\") " pod="openstack/nova-cell1-6c9a-account-create-update-8ldll" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.352880 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hztb\" (UniqueName: \"kubernetes.io/projected/b08dee6e-948e-480a-8642-ee350e0a05f1-kube-api-access-7hztb\") pod \"nova-cell1-6c9a-account-create-update-8ldll\" (UID: \"b08dee6e-948e-480a-8642-ee350e0a05f1\") " pod="openstack/nova-cell1-6c9a-account-create-update-8ldll" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.353395 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b08dee6e-948e-480a-8642-ee350e0a05f1-operator-scripts\") pod \"nova-cell1-6c9a-account-create-update-8ldll\" (UID: \"b08dee6e-948e-480a-8642-ee350e0a05f1\") " pod="openstack/nova-cell1-6c9a-account-create-update-8ldll" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.366868 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hztb\" (UniqueName: \"kubernetes.io/projected/b08dee6e-948e-480a-8642-ee350e0a05f1-kube-api-access-7hztb\") pod \"nova-cell1-6c9a-account-create-update-8ldll\" (UID: \"b08dee6e-948e-480a-8642-ee350e0a05f1\") " pod="openstack/nova-cell1-6c9a-account-create-update-8ldll" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.601032 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-pxmk9"] Jan 22 09:29:24 crc kubenswrapper[4892]: W0122 09:29:24.604572 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35dbe302_2a50_49e2_874a_b3bfe80bb483.slice/crio-c32bb28020d807b943160084f5265efc3f4ee2013c18b3d01b75446bf00091eb WatchSource:0}: Error finding container c32bb28020d807b943160084f5265efc3f4ee2013c18b3d01b75446bf00091eb: Status 404 returned error can't find the container with id c32bb28020d807b943160084f5265efc3f4ee2013c18b3d01b75446bf00091eb Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.648614 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-6c9a-account-create-update-8ldll" Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.722993 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-w9xkt"] Jan 22 09:29:24 crc kubenswrapper[4892]: W0122 09:29:24.734919 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58bdf833_f1d4_4c50_9710_0453e093b082.slice/crio-8fcd5cb3f59fcf5e66db7fff00f397634670b9568736e2825d727b5f6f394693 WatchSource:0}: Error finding container 8fcd5cb3f59fcf5e66db7fff00f397634670b9568736e2825d727b5f6f394693: Status 404 returned error can't find the container with id 8fcd5cb3f59fcf5e66db7fff00f397634670b9568736e2825d727b5f6f394693 Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.852111 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3d72-account-create-update-phg7j"] Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.922638 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-ltg2c"] Jan 22 09:29:24 crc kubenswrapper[4892]: W0122 09:29:24.936337 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod294650da_35b8_402b_86ca_722359f803f8.slice/crio-5ce4cea95f10bf9d950cb9cba36a32cbf832af4402c20d59083a72b9439624e7 WatchSource:0}: Error finding container 5ce4cea95f10bf9d950cb9cba36a32cbf832af4402c20d59083a72b9439624e7: Status 404 returned error can't find the container with id 5ce4cea95f10bf9d950cb9cba36a32cbf832af4402c20d59083a72b9439624e7 Jan 22 09:29:24 crc kubenswrapper[4892]: I0122 09:29:24.937438 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-f45c-account-create-update-j2b4k"] Jan 22 09:29:25 crc kubenswrapper[4892]: I0122 09:29:25.109766 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6c9a-account-create-update-8ldll"] Jan 22 09:29:25 crc kubenswrapper[4892]: W0122 09:29:25.114979 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb08dee6e_948e_480a_8642_ee350e0a05f1.slice/crio-cb06989dcc3cb2180432d0cc9577d9e51d124d688cd37cbb95358245c38f832b WatchSource:0}: Error finding container cb06989dcc3cb2180432d0cc9577d9e51d124d688cd37cbb95358245c38f832b: Status 404 returned error can't find the container with id cb06989dcc3cb2180432d0cc9577d9e51d124d688cd37cbb95358245c38f832b Jan 22 09:29:25 crc kubenswrapper[4892]: I0122 09:29:25.352439 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6c9a-account-create-update-8ldll" event={"ID":"b08dee6e-948e-480a-8642-ee350e0a05f1","Type":"ContainerStarted","Data":"cb06989dcc3cb2180432d0cc9577d9e51d124d688cd37cbb95358245c38f832b"} Jan 22 09:29:25 crc kubenswrapper[4892]: I0122 09:29:25.353551 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3d72-account-create-update-phg7j" event={"ID":"f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb","Type":"ContainerStarted","Data":"3773b362a7a23bf787f49f103b98b8f0498a36edd2ddd34639b7d7dd4069321a"} Jan 22 09:29:25 crc kubenswrapper[4892]: I0122 09:29:25.353601 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3d72-account-create-update-phg7j" event={"ID":"f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb","Type":"ContainerStarted","Data":"71db7fc7c319a89d967518dc660f97f677512a832f82ea5c643c2e16a9ebcf08"} 
Jan 22 09:29:25 crc kubenswrapper[4892]: I0122 09:29:25.356313 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e806d349-4c1e-4dc9-836a-ece0da878110","Type":"ContainerStarted","Data":"44858298fbb514f65a32b615af3c0284132892fe56df411d8b445a419753819d"} Jan 22 09:29:25 crc kubenswrapper[4892]: I0122 09:29:25.358077 4892 generic.go:334] "Generic (PLEG): container finished" podID="35dbe302-2a50-49e2-874a-b3bfe80bb483" containerID="d1fdb170b377c8ba3490763922a87c3734e0abc7857942e68bf6fd68b5f99015" exitCode=0 Jan 22 09:29:25 crc kubenswrapper[4892]: I0122 09:29:25.358164 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pxmk9" event={"ID":"35dbe302-2a50-49e2-874a-b3bfe80bb483","Type":"ContainerDied","Data":"d1fdb170b377c8ba3490763922a87c3734e0abc7857942e68bf6fd68b5f99015"} Jan 22 09:29:25 crc kubenswrapper[4892]: I0122 09:29:25.358199 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pxmk9" event={"ID":"35dbe302-2a50-49e2-874a-b3bfe80bb483","Type":"ContainerStarted","Data":"c32bb28020d807b943160084f5265efc3f4ee2013c18b3d01b75446bf00091eb"} Jan 22 09:29:25 crc kubenswrapper[4892]: I0122 09:29:25.360056 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-ltg2c" event={"ID":"294650da-35b8-402b-86ca-722359f803f8","Type":"ContainerStarted","Data":"7ff7e2459ca349500abce67943fe5cc8aca33d9a9a4aa65f5c8f9d48613cc7d1"} Jan 22 09:29:25 crc kubenswrapper[4892]: I0122 09:29:25.360096 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-ltg2c" event={"ID":"294650da-35b8-402b-86ca-722359f803f8","Type":"ContainerStarted","Data":"5ce4cea95f10bf9d950cb9cba36a32cbf832af4402c20d59083a72b9439624e7"} Jan 22 09:29:25 crc kubenswrapper[4892]: I0122 09:29:25.361558 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f45c-account-create-update-j2b4k" event={"ID":"962154cf-811c-41cc-a43a-e63af5139dc8","Type":"ContainerStarted","Data":"bb64dd8001bc36afa6de8eab0183e26ee4663489abe299392f350142d6fcb6f8"} Jan 22 09:29:25 crc kubenswrapper[4892]: I0122 09:29:25.361586 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f45c-account-create-update-j2b4k" event={"ID":"962154cf-811c-41cc-a43a-e63af5139dc8","Type":"ContainerStarted","Data":"6bda0f74e6f43704170ebd6a158ed142bd853844fdcf10dfe2600ceefc7a075d"} Jan 22 09:29:25 crc kubenswrapper[4892]: I0122 09:29:25.363112 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-w9xkt" event={"ID":"58bdf833-f1d4-4c50-9710-0453e093b082","Type":"ContainerStarted","Data":"95a411c8470faa5dd5f02ad37728c0582533d2ccc1cb27325759abc3f7a707d0"} Jan 22 09:29:25 crc kubenswrapper[4892]: I0122 09:29:25.363139 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-w9xkt" event={"ID":"58bdf833-f1d4-4c50-9710-0453e093b082","Type":"ContainerStarted","Data":"8fcd5cb3f59fcf5e66db7fff00f397634670b9568736e2825d727b5f6f394693"} Jan 22 09:29:25 crc kubenswrapper[4892]: I0122 09:29:25.378794 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-3d72-account-create-update-phg7j" podStartSLOduration=2.378777944 podStartE2EDuration="2.378777944s" podCreationTimestamp="2026-01-22 09:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:29:25.373896055 
+0000 UTC m=+1135.217975118" watchObservedRunningTime="2026-01-22 09:29:25.378777944 +0000 UTC m=+1135.222857007" Jan 22 09:29:25 crc kubenswrapper[4892]: I0122 09:29:25.390538 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-f45c-account-create-update-j2b4k" podStartSLOduration=1.3905222990000001 podStartE2EDuration="1.390522299s" podCreationTimestamp="2026-01-22 09:29:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:29:25.384971504 +0000 UTC m=+1135.229050567" watchObservedRunningTime="2026-01-22 09:29:25.390522299 +0000 UTC m=+1135.234601362" Jan 22 09:29:25 crc kubenswrapper[4892]: I0122 09:29:25.410405 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-ltg2c" podStartSLOduration=2.410191656 podStartE2EDuration="2.410191656s" podCreationTimestamp="2026-01-22 09:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:29:25.405521293 +0000 UTC m=+1135.249600366" watchObservedRunningTime="2026-01-22 09:29:25.410191656 +0000 UTC m=+1135.254270719" Jan 22 09:29:25 crc kubenswrapper[4892]: I0122 09:29:25.441048 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-w9xkt" podStartSLOduration=2.441027475 podStartE2EDuration="2.441027475s" podCreationTimestamp="2026-01-22 09:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:29:25.432539769 +0000 UTC m=+1135.276618832" watchObservedRunningTime="2026-01-22 09:29:25.441027475 +0000 UTC m=+1135.285106528" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.197091 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.300070 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ac9e125-77fe-4415-b124-bdd6816b313d-httpd-run\") pod \"8ac9e125-77fe-4415-b124-bdd6816b313d\" (UID: \"8ac9e125-77fe-4415-b124-bdd6816b313d\") " Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.300145 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ac9e125-77fe-4415-b124-bdd6816b313d-config-data\") pod \"8ac9e125-77fe-4415-b124-bdd6816b313d\" (UID: \"8ac9e125-77fe-4415-b124-bdd6816b313d\") " Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.300270 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ac9e125-77fe-4415-b124-bdd6816b313d-logs\") pod \"8ac9e125-77fe-4415-b124-bdd6816b313d\" (UID: \"8ac9e125-77fe-4415-b124-bdd6816b313d\") " Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.300324 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ac9e125-77fe-4415-b124-bdd6816b313d-internal-tls-certs\") pod \"8ac9e125-77fe-4415-b124-bdd6816b313d\" (UID: \"8ac9e125-77fe-4415-b124-bdd6816b313d\") " Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.300405 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"8ac9e125-77fe-4415-b124-bdd6816b313d\" (UID: \"8ac9e125-77fe-4415-b124-bdd6816b313d\") " Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.300452 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ac9e125-77fe-4415-b124-bdd6816b313d-scripts\") pod \"8ac9e125-77fe-4415-b124-bdd6816b313d\" (UID: \"8ac9e125-77fe-4415-b124-bdd6816b313d\") " Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.300478 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gs7r4\" (UniqueName: \"kubernetes.io/projected/8ac9e125-77fe-4415-b124-bdd6816b313d-kube-api-access-gs7r4\") pod \"8ac9e125-77fe-4415-b124-bdd6816b313d\" (UID: \"8ac9e125-77fe-4415-b124-bdd6816b313d\") " Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.300516 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ac9e125-77fe-4415-b124-bdd6816b313d-combined-ca-bundle\") pod \"8ac9e125-77fe-4415-b124-bdd6816b313d\" (UID: \"8ac9e125-77fe-4415-b124-bdd6816b313d\") " Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.304616 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ac9e125-77fe-4415-b124-bdd6816b313d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8ac9e125-77fe-4415-b124-bdd6816b313d" (UID: "8ac9e125-77fe-4415-b124-bdd6816b313d"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.304981 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ac9e125-77fe-4415-b124-bdd6816b313d-logs" (OuterVolumeSpecName: "logs") pod "8ac9e125-77fe-4415-b124-bdd6816b313d" (UID: "8ac9e125-77fe-4415-b124-bdd6816b313d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.307267 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "8ac9e125-77fe-4415-b124-bdd6816b313d" (UID: "8ac9e125-77fe-4415-b124-bdd6816b313d"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.309838 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ac9e125-77fe-4415-b124-bdd6816b313d-kube-api-access-gs7r4" (OuterVolumeSpecName: "kube-api-access-gs7r4") pod "8ac9e125-77fe-4415-b124-bdd6816b313d" (UID: "8ac9e125-77fe-4415-b124-bdd6816b313d"). InnerVolumeSpecName "kube-api-access-gs7r4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.330525 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ac9e125-77fe-4415-b124-bdd6816b313d-scripts" (OuterVolumeSpecName: "scripts") pod "8ac9e125-77fe-4415-b124-bdd6816b313d" (UID: "8ac9e125-77fe-4415-b124-bdd6816b313d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.339423 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ac9e125-77fe-4415-b124-bdd6816b313d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ac9e125-77fe-4415-b124-bdd6816b313d" (UID: "8ac9e125-77fe-4415-b124-bdd6816b313d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.364857 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ac9e125-77fe-4415-b124-bdd6816b313d-config-data" (OuterVolumeSpecName: "config-data") pod "8ac9e125-77fe-4415-b124-bdd6816b313d" (UID: "8ac9e125-77fe-4415-b124-bdd6816b313d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.374949 4892 generic.go:334] "Generic (PLEG): container finished" podID="ce40113d-7ce5-4cff-b5e4-6d84102a6af6" containerID="a3ac5d889501d793e854a738c29613cf750642c47f17f3b68816dd1170ca9657" exitCode=0 Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.375043 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ce40113d-7ce5-4cff-b5e4-6d84102a6af6","Type":"ContainerDied","Data":"a3ac5d889501d793e854a738c29613cf750642c47f17f3b68816dd1170ca9657"} Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.377213 4892 generic.go:334] "Generic (PLEG): container finished" podID="8ac9e125-77fe-4415-b124-bdd6816b313d" containerID="1386fdc923c72735bf28abd229c4dbece689581e11d214b9dd509ad6a52e576f" exitCode=0 Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.377350 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.378143 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8ac9e125-77fe-4415-b124-bdd6816b313d","Type":"ContainerDied","Data":"1386fdc923c72735bf28abd229c4dbece689581e11d214b9dd509ad6a52e576f"} Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.378180 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8ac9e125-77fe-4415-b124-bdd6816b313d","Type":"ContainerDied","Data":"31b18a27d1bbfed4fc5ec923aa89ee70ec23d7126b5524823c4595a30e578221"} Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.378200 4892 scope.go:117] "RemoveContainer" containerID="1386fdc923c72735bf28abd229c4dbece689581e11d214b9dd509ad6a52e576f" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.381956 4892 generic.go:334] "Generic (PLEG): container finished" podID="294650da-35b8-402b-86ca-722359f803f8" containerID="7ff7e2459ca349500abce67943fe5cc8aca33d9a9a4aa65f5c8f9d48613cc7d1" exitCode=0 Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.382100 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-ltg2c" event={"ID":"294650da-35b8-402b-86ca-722359f803f8","Type":"ContainerDied","Data":"7ff7e2459ca349500abce67943fe5cc8aca33d9a9a4aa65f5c8f9d48613cc7d1"} Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.383620 4892 generic.go:334] "Generic (PLEG): container finished" podID="58bdf833-f1d4-4c50-9710-0453e093b082" containerID="95a411c8470faa5dd5f02ad37728c0582533d2ccc1cb27325759abc3f7a707d0" exitCode=0 Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.383695 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-w9xkt" event={"ID":"58bdf833-f1d4-4c50-9710-0453e093b082","Type":"ContainerDied","Data":"95a411c8470faa5dd5f02ad37728c0582533d2ccc1cb27325759abc3f7a707d0"} Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.387972 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6c9a-account-create-update-8ldll" event={"ID":"b08dee6e-948e-480a-8642-ee350e0a05f1","Type":"ContainerStarted","Data":"1bd370dcccf7e6a6d737be0642b7b4c16ecbf0b887c971f8f29a9fb94836a665"} Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.403497 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ac9e125-77fe-4415-b124-bdd6816b313d-internal-tls-certs" 
(OuterVolumeSpecName: "internal-tls-certs") pod "8ac9e125-77fe-4415-b124-bdd6816b313d" (UID: "8ac9e125-77fe-4415-b124-bdd6816b313d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.403609 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ac9e125-77fe-4415-b124-bdd6816b313d-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.403638 4892 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ac9e125-77fe-4415-b124-bdd6816b313d-logs\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.403662 4892 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.403671 4892 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ac9e125-77fe-4415-b124-bdd6816b313d-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.403713 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gs7r4\" (UniqueName: \"kubernetes.io/projected/8ac9e125-77fe-4415-b124-bdd6816b313d-kube-api-access-gs7r4\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.403734 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ac9e125-77fe-4415-b124-bdd6816b313d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.403771 4892 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ac9e125-77fe-4415-b124-bdd6816b313d-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.421019 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-6c9a-account-create-update-8ldll" podStartSLOduration=2.421000692 podStartE2EDuration="2.421000692s" podCreationTimestamp="2026-01-22 09:29:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:29:26.4151672 +0000 UTC m=+1136.259246263" watchObservedRunningTime="2026-01-22 09:29:26.421000692 +0000 UTC m=+1136.265079755" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.434926 4892 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.505732 4892 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ac9e125-77fe-4415-b124-bdd6816b313d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.506246 4892 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.559412 4892 scope.go:117] "RemoveContainer" 
containerID="59c06b000a9cbda5fa1e6e5a8841b468d6cad2333bcec4686276681679acd1c5" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.579375 4892 scope.go:117] "RemoveContainer" containerID="1386fdc923c72735bf28abd229c4dbece689581e11d214b9dd509ad6a52e576f" Jan 22 09:29:26 crc kubenswrapper[4892]: E0122 09:29:26.579712 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1386fdc923c72735bf28abd229c4dbece689581e11d214b9dd509ad6a52e576f\": container with ID starting with 1386fdc923c72735bf28abd229c4dbece689581e11d214b9dd509ad6a52e576f not found: ID does not exist" containerID="1386fdc923c72735bf28abd229c4dbece689581e11d214b9dd509ad6a52e576f" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.579758 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1386fdc923c72735bf28abd229c4dbece689581e11d214b9dd509ad6a52e576f"} err="failed to get container status \"1386fdc923c72735bf28abd229c4dbece689581e11d214b9dd509ad6a52e576f\": rpc error: code = NotFound desc = could not find container \"1386fdc923c72735bf28abd229c4dbece689581e11d214b9dd509ad6a52e576f\": container with ID starting with 1386fdc923c72735bf28abd229c4dbece689581e11d214b9dd509ad6a52e576f not found: ID does not exist" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.579783 4892 scope.go:117] "RemoveContainer" containerID="59c06b000a9cbda5fa1e6e5a8841b468d6cad2333bcec4686276681679acd1c5" Jan 22 09:29:26 crc kubenswrapper[4892]: E0122 09:29:26.580027 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59c06b000a9cbda5fa1e6e5a8841b468d6cad2333bcec4686276681679acd1c5\": container with ID starting with 59c06b000a9cbda5fa1e6e5a8841b468d6cad2333bcec4686276681679acd1c5 not found: ID does not exist" containerID="59c06b000a9cbda5fa1e6e5a8841b468d6cad2333bcec4686276681679acd1c5" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.580050 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59c06b000a9cbda5fa1e6e5a8841b468d6cad2333bcec4686276681679acd1c5"} err="failed to get container status \"59c06b000a9cbda5fa1e6e5a8841b468d6cad2333bcec4686276681679acd1c5\": rpc error: code = NotFound desc = could not find container \"59c06b000a9cbda5fa1e6e5a8841b468d6cad2333bcec4686276681679acd1c5\": container with ID starting with 59c06b000a9cbda5fa1e6e5a8841b468d6cad2333bcec4686276681679acd1c5 not found: ID does not exist" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.785924 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-pxmk9" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.805221 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.814797 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.834245 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 09:29:26 crc kubenswrapper[4892]: E0122 09:29:26.834739 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35dbe302-2a50-49e2-874a-b3bfe80bb483" containerName="mariadb-database-create" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.834756 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="35dbe302-2a50-49e2-874a-b3bfe80bb483" containerName="mariadb-database-create" Jan 22 09:29:26 crc kubenswrapper[4892]: E0122 09:29:26.834774 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac9e125-77fe-4415-b124-bdd6816b313d" containerName="glance-httpd" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.834782 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac9e125-77fe-4415-b124-bdd6816b313d" containerName="glance-httpd" Jan 22 09:29:26 crc kubenswrapper[4892]: E0122 09:29:26.834811 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac9e125-77fe-4415-b124-bdd6816b313d" containerName="glance-log" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.834820 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac9e125-77fe-4415-b124-bdd6816b313d" containerName="glance-log" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.835035 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="35dbe302-2a50-49e2-874a-b3bfe80bb483" containerName="mariadb-database-create" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.835055 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ac9e125-77fe-4415-b124-bdd6816b313d" containerName="glance-httpd" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.835072 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ac9e125-77fe-4415-b124-bdd6816b313d" containerName="glance-log" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.836021 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.844737 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.845658 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.866353 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.912963 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6m9p\" (UniqueName: \"kubernetes.io/projected/35dbe302-2a50-49e2-874a-b3bfe80bb483-kube-api-access-c6m9p\") pod \"35dbe302-2a50-49e2-874a-b3bfe80bb483\" (UID: \"35dbe302-2a50-49e2-874a-b3bfe80bb483\") " Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.913097 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35dbe302-2a50-49e2-874a-b3bfe80bb483-operator-scripts\") pod \"35dbe302-2a50-49e2-874a-b3bfe80bb483\" (UID: \"35dbe302-2a50-49e2-874a-b3bfe80bb483\") " Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.913542 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0d04b37-82ff-4c76-ab88-4602d405c9e0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c0d04b37-82ff-4c76-ab88-4602d405c9e0\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.913575 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0d04b37-82ff-4c76-ab88-4602d405c9e0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c0d04b37-82ff-4c76-ab88-4602d405c9e0\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.913619 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0d04b37-82ff-4c76-ab88-4602d405c9e0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c0d04b37-82ff-4c76-ab88-4602d405c9e0\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.913662 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0d04b37-82ff-4c76-ab88-4602d405c9e0-logs\") pod \"glance-default-internal-api-0\" (UID: \"c0d04b37-82ff-4c76-ab88-4602d405c9e0\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.913684 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c0d04b37-82ff-4c76-ab88-4602d405c9e0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c0d04b37-82ff-4c76-ab88-4602d405c9e0\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.913676 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35dbe302-2a50-49e2-874a-b3bfe80bb483-operator-scripts" 
(OuterVolumeSpecName: "operator-scripts") pod "35dbe302-2a50-49e2-874a-b3bfe80bb483" (UID: "35dbe302-2a50-49e2-874a-b3bfe80bb483"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.913873 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"c0d04b37-82ff-4c76-ab88-4602d405c9e0\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.913902 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpgwz\" (UniqueName: \"kubernetes.io/projected/c0d04b37-82ff-4c76-ab88-4602d405c9e0-kube-api-access-fpgwz\") pod \"glance-default-internal-api-0\" (UID: \"c0d04b37-82ff-4c76-ab88-4602d405c9e0\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.914039 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0d04b37-82ff-4c76-ab88-4602d405c9e0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c0d04b37-82ff-4c76-ab88-4602d405c9e0\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.914891 4892 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35dbe302-2a50-49e2-874a-b3bfe80bb483-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:26 crc kubenswrapper[4892]: I0122 09:29:26.917993 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35dbe302-2a50-49e2-874a-b3bfe80bb483-kube-api-access-c6m9p" (OuterVolumeSpecName: "kube-api-access-c6m9p") pod "35dbe302-2a50-49e2-874a-b3bfe80bb483" (UID: "35dbe302-2a50-49e2-874a-b3bfe80bb483"). InnerVolumeSpecName "kube-api-access-c6m9p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.016401 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpgwz\" (UniqueName: \"kubernetes.io/projected/c0d04b37-82ff-4c76-ab88-4602d405c9e0-kube-api-access-fpgwz\") pod \"glance-default-internal-api-0\" (UID: \"c0d04b37-82ff-4c76-ab88-4602d405c9e0\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.016457 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"c0d04b37-82ff-4c76-ab88-4602d405c9e0\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.016527 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0d04b37-82ff-4c76-ab88-4602d405c9e0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c0d04b37-82ff-4c76-ab88-4602d405c9e0\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.016628 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0d04b37-82ff-4c76-ab88-4602d405c9e0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c0d04b37-82ff-4c76-ab88-4602d405c9e0\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.016656 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0d04b37-82ff-4c76-ab88-4602d405c9e0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c0d04b37-82ff-4c76-ab88-4602d405c9e0\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.016701 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0d04b37-82ff-4c76-ab88-4602d405c9e0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c0d04b37-82ff-4c76-ab88-4602d405c9e0\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.016738 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0d04b37-82ff-4c76-ab88-4602d405c9e0-logs\") pod \"glance-default-internal-api-0\" (UID: \"c0d04b37-82ff-4c76-ab88-4602d405c9e0\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.016756 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c0d04b37-82ff-4c76-ab88-4602d405c9e0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c0d04b37-82ff-4c76-ab88-4602d405c9e0\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.016804 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6m9p\" (UniqueName: \"kubernetes.io/projected/35dbe302-2a50-49e2-874a-b3bfe80bb483-kube-api-access-c6m9p\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.017237 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c0d04b37-82ff-4c76-ab88-4602d405c9e0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c0d04b37-82ff-4c76-ab88-4602d405c9e0\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.017713 4892 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"c0d04b37-82ff-4c76-ab88-4602d405c9e0\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.020888 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0d04b37-82ff-4c76-ab88-4602d405c9e0-logs\") pod \"glance-default-internal-api-0\" (UID: \"c0d04b37-82ff-4c76-ab88-4602d405c9e0\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.022265 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0d04b37-82ff-4c76-ab88-4602d405c9e0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c0d04b37-82ff-4c76-ab88-4602d405c9e0\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.026317 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0d04b37-82ff-4c76-ab88-4602d405c9e0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c0d04b37-82ff-4c76-ab88-4602d405c9e0\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.026983 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0d04b37-82ff-4c76-ab88-4602d405c9e0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c0d04b37-82ff-4c76-ab88-4602d405c9e0\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.027783 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0d04b37-82ff-4c76-ab88-4602d405c9e0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c0d04b37-82ff-4c76-ab88-4602d405c9e0\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.039825 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpgwz\" (UniqueName: \"kubernetes.io/projected/c0d04b37-82ff-4c76-ab88-4602d405c9e0-kube-api-access-fpgwz\") pod \"glance-default-internal-api-0\" (UID: \"c0d04b37-82ff-4c76-ab88-4602d405c9e0\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.050792 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"c0d04b37-82ff-4c76-ab88-4602d405c9e0\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.092382 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-674547b56f-gvjxm" Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.093880 4892 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-674547b56f-gvjxm" Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.158161 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.406176 4892 generic.go:334] "Generic (PLEG): container finished" podID="962154cf-811c-41cc-a43a-e63af5139dc8" containerID="bb64dd8001bc36afa6de8eab0183e26ee4663489abe299392f350142d6fcb6f8" exitCode=0 Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.406442 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f45c-account-create-update-j2b4k" event={"ID":"962154cf-811c-41cc-a43a-e63af5139dc8","Type":"ContainerDied","Data":"bb64dd8001bc36afa6de8eab0183e26ee4663489abe299392f350142d6fcb6f8"} Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.407642 4892 generic.go:334] "Generic (PLEG): container finished" podID="b08dee6e-948e-480a-8642-ee350e0a05f1" containerID="1bd370dcccf7e6a6d737be0642b7b4c16ecbf0b887c971f8f29a9fb94836a665" exitCode=0 Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.407691 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6c9a-account-create-update-8ldll" event={"ID":"b08dee6e-948e-480a-8642-ee350e0a05f1","Type":"ContainerDied","Data":"1bd370dcccf7e6a6d737be0642b7b4c16ecbf0b887c971f8f29a9fb94836a665"} Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.415417 4892 generic.go:334] "Generic (PLEG): container finished" podID="f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb" containerID="3773b362a7a23bf787f49f103b98b8f0498a36edd2ddd34639b7d7dd4069321a" exitCode=0 Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.415531 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3d72-account-create-update-phg7j" event={"ID":"f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb","Type":"ContainerDied","Data":"3773b362a7a23bf787f49f103b98b8f0498a36edd2ddd34639b7d7dd4069321a"} Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.431090 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-pxmk9" Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.433614 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ac9e125-77fe-4415-b124-bdd6816b313d" path="/var/lib/kubelet/pods/8ac9e125-77fe-4415-b124-bdd6816b313d/volumes" Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.434558 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pxmk9" event={"ID":"35dbe302-2a50-49e2-874a-b3bfe80bb483","Type":"ContainerDied","Data":"c32bb28020d807b943160084f5265efc3f4ee2013c18b3d01b75446bf00091eb"} Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.434581 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c32bb28020d807b943160084f5265efc3f4ee2013c18b3d01b75446bf00091eb" Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.840857 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.925756 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-ltg2c" Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.932706 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-w9xkt" Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.936865 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-combined-ca-bundle\") pod \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\" (UID: \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\") " Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.936927 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-scripts\") pod \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\" (UID: \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\") " Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.937151 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-public-tls-certs\") pod \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\" (UID: \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\") " Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.937228 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-config-data\") pod \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\" (UID: \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\") " Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.938302 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\" (UID: \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\") " Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.938383 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-logs\") pod \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\" (UID: \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\") " Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.938437 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-httpd-run\") pod \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\" (UID: \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\") " Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.938458 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tktjv\" (UniqueName: \"kubernetes.io/projected/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-kube-api-access-tktjv\") pod \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\" (UID: \"ce40113d-7ce5-4cff-b5e4-6d84102a6af6\") " Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.946539 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ce40113d-7ce5-4cff-b5e4-6d84102a6af6" (UID: "ce40113d-7ce5-4cff-b5e4-6d84102a6af6"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.946457 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-logs" (OuterVolumeSpecName: "logs") pod "ce40113d-7ce5-4cff-b5e4-6d84102a6af6" (UID: "ce40113d-7ce5-4cff-b5e4-6d84102a6af6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.951077 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-scripts" (OuterVolumeSpecName: "scripts") pod "ce40113d-7ce5-4cff-b5e4-6d84102a6af6" (UID: "ce40113d-7ce5-4cff-b5e4-6d84102a6af6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.952319 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "ce40113d-7ce5-4cff-b5e4-6d84102a6af6" (UID: "ce40113d-7ce5-4cff-b5e4-6d84102a6af6"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 09:29:27 crc kubenswrapper[4892]: I0122 09:29:27.959261 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-kube-api-access-tktjv" (OuterVolumeSpecName: "kube-api-access-tktjv") pod "ce40113d-7ce5-4cff-b5e4-6d84102a6af6" (UID: "ce40113d-7ce5-4cff-b5e4-6d84102a6af6"). InnerVolumeSpecName "kube-api-access-tktjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.018494 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce40113d-7ce5-4cff-b5e4-6d84102a6af6" (UID: "ce40113d-7ce5-4cff-b5e4-6d84102a6af6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.024161 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ce40113d-7ce5-4cff-b5e4-6d84102a6af6" (UID: "ce40113d-7ce5-4cff-b5e4-6d84102a6af6"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.044248 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58bdf833-f1d4-4c50-9710-0453e093b082-operator-scripts\") pod \"58bdf833-f1d4-4c50-9710-0453e093b082\" (UID: \"58bdf833-f1d4-4c50-9710-0453e093b082\") " Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.044322 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bx98f\" (UniqueName: \"kubernetes.io/projected/294650da-35b8-402b-86ca-722359f803f8-kube-api-access-bx98f\") pod \"294650da-35b8-402b-86ca-722359f803f8\" (UID: \"294650da-35b8-402b-86ca-722359f803f8\") " Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.044343 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wp4qf\" (UniqueName: \"kubernetes.io/projected/58bdf833-f1d4-4c50-9710-0453e093b082-kube-api-access-wp4qf\") pod \"58bdf833-f1d4-4c50-9710-0453e093b082\" (UID: \"58bdf833-f1d4-4c50-9710-0453e093b082\") " Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.044370 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/294650da-35b8-402b-86ca-722359f803f8-operator-scripts\") pod \"294650da-35b8-402b-86ca-722359f803f8\" (UID: \"294650da-35b8-402b-86ca-722359f803f8\") " Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.044739 4892 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.044761 4892 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.044770 4892 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-logs\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.044778 4892 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.044786 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tktjv\" (UniqueName: \"kubernetes.io/projected/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-kube-api-access-tktjv\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.044796 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.044803 4892 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.050805 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/58bdf833-f1d4-4c50-9710-0453e093b082-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "58bdf833-f1d4-4c50-9710-0453e093b082" (UID: "58bdf833-f1d4-4c50-9710-0453e093b082"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.055413 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/294650da-35b8-402b-86ca-722359f803f8-kube-api-access-bx98f" (OuterVolumeSpecName: "kube-api-access-bx98f") pod "294650da-35b8-402b-86ca-722359f803f8" (UID: "294650da-35b8-402b-86ca-722359f803f8"). InnerVolumeSpecName "kube-api-access-bx98f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.060630 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/294650da-35b8-402b-86ca-722359f803f8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "294650da-35b8-402b-86ca-722359f803f8" (UID: "294650da-35b8-402b-86ca-722359f803f8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.064370 4892 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.069408 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58bdf833-f1d4-4c50-9710-0453e093b082-kube-api-access-wp4qf" (OuterVolumeSpecName: "kube-api-access-wp4qf") pod "58bdf833-f1d4-4c50-9710-0453e093b082" (UID: "58bdf833-f1d4-4c50-9710-0453e093b082"). InnerVolumeSpecName "kube-api-access-wp4qf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.094385 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-config-data" (OuterVolumeSpecName: "config-data") pod "ce40113d-7ce5-4cff-b5e4-6d84102a6af6" (UID: "ce40113d-7ce5-4cff-b5e4-6d84102a6af6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.153031 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce40113d-7ce5-4cff-b5e4-6d84102a6af6-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.153052 4892 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58bdf833-f1d4-4c50-9710-0453e093b082-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.153064 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bx98f\" (UniqueName: \"kubernetes.io/projected/294650da-35b8-402b-86ca-722359f803f8-kube-api-access-bx98f\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.153073 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wp4qf\" (UniqueName: \"kubernetes.io/projected/58bdf833-f1d4-4c50-9710-0453e093b082-kube-api-access-wp4qf\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.153082 4892 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/294650da-35b8-402b-86ca-722359f803f8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.153091 4892 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:28 crc kubenswrapper[4892]: W0122 09:29:28.160010 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0d04b37_82ff_4c76_ab88_4602d405c9e0.slice/crio-6684af3c53dde2e2d136d4823959d9b70ba13fcd6c6ad7d0c201946721fb008f WatchSource:0}: Error finding container 6684af3c53dde2e2d136d4823959d9b70ba13fcd6c6ad7d0c201946721fb008f: Status 404 returned error can't find the container with id 6684af3c53dde2e2d136d4823959d9b70ba13fcd6c6ad7d0c201946721fb008f Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.171496 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.447409 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e806d349-4c1e-4dc9-836a-ece0da878110","Type":"ContainerStarted","Data":"451ed154220b8d6db1463825dd45065f5c4a9dc2fb4292fb20dd8f7405357f0f"} Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.447793 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e806d349-4c1e-4dc9-836a-ece0da878110","Type":"ContainerStarted","Data":"6a855fe173290ab7bc40bdfdeb4813df9e4c4f9d8ae704973ee2706ddf2c78a8"} Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.451684 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-ltg2c" event={"ID":"294650da-35b8-402b-86ca-722359f803f8","Type":"ContainerDied","Data":"5ce4cea95f10bf9d950cb9cba36a32cbf832af4402c20d59083a72b9439624e7"} Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.451711 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-ltg2c" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.451737 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ce4cea95f10bf9d950cb9cba36a32cbf832af4402c20d59083a72b9439624e7" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.453323 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-w9xkt" event={"ID":"58bdf833-f1d4-4c50-9710-0453e093b082","Type":"ContainerDied","Data":"8fcd5cb3f59fcf5e66db7fff00f397634670b9568736e2825d727b5f6f394693"} Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.453341 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fcd5cb3f59fcf5e66db7fff00f397634670b9568736e2825d727b5f6f394693" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.453391 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-w9xkt" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.458655 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c0d04b37-82ff-4c76-ab88-4602d405c9e0","Type":"ContainerStarted","Data":"6684af3c53dde2e2d136d4823959d9b70ba13fcd6c6ad7d0c201946721fb008f"} Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.460769 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ce40113d-7ce5-4cff-b5e4-6d84102a6af6","Type":"ContainerDied","Data":"0309c9e5797bef6b95216335d0f92d2f6d5119762352354572637ca446a799d6"} Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.460831 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.460834 4892 scope.go:117] "RemoveContainer" containerID="a3ac5d889501d793e854a738c29613cf750642c47f17f3b68816dd1170ca9657" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.510621 4892 scope.go:117] "RemoveContainer" containerID="c0f099de934d3e1136d9e241dc36c5aca540c063e025dfd9576ea837ebfdedea" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.529269 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.547267 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.563045 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 09:29:28 crc kubenswrapper[4892]: E0122 09:29:28.563503 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce40113d-7ce5-4cff-b5e4-6d84102a6af6" containerName="glance-httpd" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.563515 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce40113d-7ce5-4cff-b5e4-6d84102a6af6" containerName="glance-httpd" Jan 22 09:29:28 crc kubenswrapper[4892]: E0122 09:29:28.563575 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58bdf833-f1d4-4c50-9710-0453e093b082" containerName="mariadb-database-create" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.563581 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="58bdf833-f1d4-4c50-9710-0453e093b082" containerName="mariadb-database-create" Jan 22 09:29:28 crc kubenswrapper[4892]: E0122 09:29:28.563601 4892 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="294650da-35b8-402b-86ca-722359f803f8" containerName="mariadb-database-create" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.563608 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="294650da-35b8-402b-86ca-722359f803f8" containerName="mariadb-database-create" Jan 22 09:29:28 crc kubenswrapper[4892]: E0122 09:29:28.563618 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce40113d-7ce5-4cff-b5e4-6d84102a6af6" containerName="glance-log" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.563623 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce40113d-7ce5-4cff-b5e4-6d84102a6af6" containerName="glance-log" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.563795 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce40113d-7ce5-4cff-b5e4-6d84102a6af6" containerName="glance-httpd" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.563805 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce40113d-7ce5-4cff-b5e4-6d84102a6af6" containerName="glance-log" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.563818 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="294650da-35b8-402b-86ca-722359f803f8" containerName="mariadb-database-create" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.563826 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="58bdf833-f1d4-4c50-9710-0453e093b082" containerName="mariadb-database-create" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.564803 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.572746 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.573461 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.585133 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.666510 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e16d2673-ef7d-40c6-b1ae-c43fc8771d30-scripts\") pod \"glance-default-external-api-0\" (UID: \"e16d2673-ef7d-40c6-b1ae-c43fc8771d30\") " pod="openstack/glance-default-external-api-0" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.666552 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e16d2673-ef7d-40c6-b1ae-c43fc8771d30-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e16d2673-ef7d-40c6-b1ae-c43fc8771d30\") " pod="openstack/glance-default-external-api-0" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.666585 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"e16d2673-ef7d-40c6-b1ae-c43fc8771d30\") " pod="openstack/glance-default-external-api-0" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.666689 4892 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e16d2673-ef7d-40c6-b1ae-c43fc8771d30-config-data\") pod \"glance-default-external-api-0\" (UID: \"e16d2673-ef7d-40c6-b1ae-c43fc8771d30\") " pod="openstack/glance-default-external-api-0" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.666722 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e16d2673-ef7d-40c6-b1ae-c43fc8771d30-logs\") pod \"glance-default-external-api-0\" (UID: \"e16d2673-ef7d-40c6-b1ae-c43fc8771d30\") " pod="openstack/glance-default-external-api-0" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.666747 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e16d2673-ef7d-40c6-b1ae-c43fc8771d30-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e16d2673-ef7d-40c6-b1ae-c43fc8771d30\") " pod="openstack/glance-default-external-api-0" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.666767 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e16d2673-ef7d-40c6-b1ae-c43fc8771d30-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e16d2673-ef7d-40c6-b1ae-c43fc8771d30\") " pod="openstack/glance-default-external-api-0" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.666819 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64l6m\" (UniqueName: \"kubernetes.io/projected/e16d2673-ef7d-40c6-b1ae-c43fc8771d30-kube-api-access-64l6m\") pod \"glance-default-external-api-0\" (UID: \"e16d2673-ef7d-40c6-b1ae-c43fc8771d30\") " pod="openstack/glance-default-external-api-0" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.768575 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e16d2673-ef7d-40c6-b1ae-c43fc8771d30-logs\") pod \"glance-default-external-api-0\" (UID: \"e16d2673-ef7d-40c6-b1ae-c43fc8771d30\") " pod="openstack/glance-default-external-api-0" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.768636 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e16d2673-ef7d-40c6-b1ae-c43fc8771d30-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e16d2673-ef7d-40c6-b1ae-c43fc8771d30\") " pod="openstack/glance-default-external-api-0" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.768669 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e16d2673-ef7d-40c6-b1ae-c43fc8771d30-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e16d2673-ef7d-40c6-b1ae-c43fc8771d30\") " pod="openstack/glance-default-external-api-0" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.768744 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64l6m\" (UniqueName: \"kubernetes.io/projected/e16d2673-ef7d-40c6-b1ae-c43fc8771d30-kube-api-access-64l6m\") pod \"glance-default-external-api-0\" (UID: \"e16d2673-ef7d-40c6-b1ae-c43fc8771d30\") " pod="openstack/glance-default-external-api-0" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.768813 
4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e16d2673-ef7d-40c6-b1ae-c43fc8771d30-scripts\") pod \"glance-default-external-api-0\" (UID: \"e16d2673-ef7d-40c6-b1ae-c43fc8771d30\") " pod="openstack/glance-default-external-api-0" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.768839 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e16d2673-ef7d-40c6-b1ae-c43fc8771d30-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e16d2673-ef7d-40c6-b1ae-c43fc8771d30\") " pod="openstack/glance-default-external-api-0" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.768871 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"e16d2673-ef7d-40c6-b1ae-c43fc8771d30\") " pod="openstack/glance-default-external-api-0" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.768919 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e16d2673-ef7d-40c6-b1ae-c43fc8771d30-config-data\") pod \"glance-default-external-api-0\" (UID: \"e16d2673-ef7d-40c6-b1ae-c43fc8771d30\") " pod="openstack/glance-default-external-api-0" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.769804 4892 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"e16d2673-ef7d-40c6-b1ae-c43fc8771d30\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.772649 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e16d2673-ef7d-40c6-b1ae-c43fc8771d30-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e16d2673-ef7d-40c6-b1ae-c43fc8771d30\") " pod="openstack/glance-default-external-api-0" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.772735 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e16d2673-ef7d-40c6-b1ae-c43fc8771d30-logs\") pod \"glance-default-external-api-0\" (UID: \"e16d2673-ef7d-40c6-b1ae-c43fc8771d30\") " pod="openstack/glance-default-external-api-0" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.778046 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e16d2673-ef7d-40c6-b1ae-c43fc8771d30-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e16d2673-ef7d-40c6-b1ae-c43fc8771d30\") " pod="openstack/glance-default-external-api-0" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.778051 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e16d2673-ef7d-40c6-b1ae-c43fc8771d30-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e16d2673-ef7d-40c6-b1ae-c43fc8771d30\") " pod="openstack/glance-default-external-api-0" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.778745 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e16d2673-ef7d-40c6-b1ae-c43fc8771d30-config-data\") pod \"glance-default-external-api-0\" (UID: \"e16d2673-ef7d-40c6-b1ae-c43fc8771d30\") " pod="openstack/glance-default-external-api-0" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.786624 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e16d2673-ef7d-40c6-b1ae-c43fc8771d30-scripts\") pod \"glance-default-external-api-0\" (UID: \"e16d2673-ef7d-40c6-b1ae-c43fc8771d30\") " pod="openstack/glance-default-external-api-0" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.786966 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64l6m\" (UniqueName: \"kubernetes.io/projected/e16d2673-ef7d-40c6-b1ae-c43fc8771d30-kube-api-access-64l6m\") pod \"glance-default-external-api-0\" (UID: \"e16d2673-ef7d-40c6-b1ae-c43fc8771d30\") " pod="openstack/glance-default-external-api-0" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.806925 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"e16d2673-ef7d-40c6-b1ae-c43fc8771d30\") " pod="openstack/glance-default-external-api-0" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.893807 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 09:29:28 crc kubenswrapper[4892]: I0122 09:29:28.980262 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6c9a-account-create-update-8ldll" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.031009 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f45c-account-create-update-j2b4k" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.038404 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-3d72-account-create-update-phg7j" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.075570 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hccgm\" (UniqueName: \"kubernetes.io/projected/962154cf-811c-41cc-a43a-e63af5139dc8-kube-api-access-hccgm\") pod \"962154cf-811c-41cc-a43a-e63af5139dc8\" (UID: \"962154cf-811c-41cc-a43a-e63af5139dc8\") " Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.075963 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hztb\" (UniqueName: \"kubernetes.io/projected/b08dee6e-948e-480a-8642-ee350e0a05f1-kube-api-access-7hztb\") pod \"b08dee6e-948e-480a-8642-ee350e0a05f1\" (UID: \"b08dee6e-948e-480a-8642-ee350e0a05f1\") " Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.076003 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b08dee6e-948e-480a-8642-ee350e0a05f1-operator-scripts\") pod \"b08dee6e-948e-480a-8642-ee350e0a05f1\" (UID: \"b08dee6e-948e-480a-8642-ee350e0a05f1\") " Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.076062 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb-operator-scripts\") pod \"f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb\" (UID: \"f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb\") " Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.076192 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/962154cf-811c-41cc-a43a-e63af5139dc8-operator-scripts\") pod \"962154cf-811c-41cc-a43a-e63af5139dc8\" (UID: \"962154cf-811c-41cc-a43a-e63af5139dc8\") " Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.076913 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b08dee6e-948e-480a-8642-ee350e0a05f1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b08dee6e-948e-480a-8642-ee350e0a05f1" (UID: "b08dee6e-948e-480a-8642-ee350e0a05f1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.078206 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb" (UID: "f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.080484 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/962154cf-811c-41cc-a43a-e63af5139dc8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "962154cf-811c-41cc-a43a-e63af5139dc8" (UID: "962154cf-811c-41cc-a43a-e63af5139dc8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.080843 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b08dee6e-948e-480a-8642-ee350e0a05f1-kube-api-access-7hztb" (OuterVolumeSpecName: "kube-api-access-7hztb") pod "b08dee6e-948e-480a-8642-ee350e0a05f1" (UID: "b08dee6e-948e-480a-8642-ee350e0a05f1"). InnerVolumeSpecName "kube-api-access-7hztb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.080914 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/962154cf-811c-41cc-a43a-e63af5139dc8-kube-api-access-hccgm" (OuterVolumeSpecName: "kube-api-access-hccgm") pod "962154cf-811c-41cc-a43a-e63af5139dc8" (UID: "962154cf-811c-41cc-a43a-e63af5139dc8"). InnerVolumeSpecName "kube-api-access-hccgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.178078 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwpzn\" (UniqueName: \"kubernetes.io/projected/f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb-kube-api-access-fwpzn\") pod \"f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb\" (UID: \"f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb\") " Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.178473 4892 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.178488 4892 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/962154cf-811c-41cc-a43a-e63af5139dc8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.178497 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hccgm\" (UniqueName: \"kubernetes.io/projected/962154cf-811c-41cc-a43a-e63af5139dc8-kube-api-access-hccgm\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.178508 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hztb\" (UniqueName: \"kubernetes.io/projected/b08dee6e-948e-480a-8642-ee350e0a05f1-kube-api-access-7hztb\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.178516 4892 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b08dee6e-948e-480a-8642-ee350e0a05f1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.183691 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb-kube-api-access-fwpzn" (OuterVolumeSpecName: "kube-api-access-fwpzn") pod "f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb" (UID: "f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb"). InnerVolumeSpecName "kube-api-access-fwpzn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.281245 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwpzn\" (UniqueName: \"kubernetes.io/projected/f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb-kube-api-access-fwpzn\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.430744 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce40113d-7ce5-4cff-b5e4-6d84102a6af6" path="/var/lib/kubelet/pods/ce40113d-7ce5-4cff-b5e4-6d84102a6af6/volumes" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.477140 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6c9a-account-create-update-8ldll" event={"ID":"b08dee6e-948e-480a-8642-ee350e0a05f1","Type":"ContainerDied","Data":"cb06989dcc3cb2180432d0cc9577d9e51d124d688cd37cbb95358245c38f832b"} Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.477447 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb06989dcc3cb2180432d0cc9577d9e51d124d688cd37cbb95358245c38f832b" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.477519 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6c9a-account-create-update-8ldll" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.487360 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3d72-account-create-update-phg7j" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.487363 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3d72-account-create-update-phg7j" event={"ID":"f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb","Type":"ContainerDied","Data":"71db7fc7c319a89d967518dc660f97f677512a832f82ea5c643c2e16a9ebcf08"} Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.487411 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71db7fc7c319a89d967518dc660f97f677512a832f82ea5c643c2e16a9ebcf08" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.502496 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f45c-account-create-update-j2b4k" event={"ID":"962154cf-811c-41cc-a43a-e63af5139dc8","Type":"ContainerDied","Data":"6bda0f74e6f43704170ebd6a158ed142bd853844fdcf10dfe2600ceefc7a075d"} Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.502561 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bda0f74e6f43704170ebd6a158ed142bd853844fdcf10dfe2600ceefc7a075d" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.502584 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-f45c-account-create-update-j2b4k" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.510017 4892 generic.go:334] "Generic (PLEG): container finished" podID="c837fca4-ae2c-43fd-850c-f2aca8331d27" containerID="29a40a09756cba7b3751e000919e2a9027e341f2b2d60abf1bab59b34c64bafb" exitCode=137 Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.510083 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79777d5484-zk25q" event={"ID":"c837fca4-ae2c-43fd-850c-f2aca8331d27","Type":"ContainerDied","Data":"29a40a09756cba7b3751e000919e2a9027e341f2b2d60abf1bab59b34c64bafb"} Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.511652 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c0d04b37-82ff-4c76-ab88-4602d405c9e0","Type":"ContainerStarted","Data":"c9dcbb37021eb4c733da71c3183ac4a79dc43a306a06899ee9b14f423e8f60e2"} Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.658328 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.731845 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-79777d5484-zk25q" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.792329 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c837fca4-ae2c-43fd-850c-f2aca8331d27-logs\") pod \"c837fca4-ae2c-43fd-850c-f2aca8331d27\" (UID: \"c837fca4-ae2c-43fd-850c-f2aca8331d27\") " Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.792424 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq6km\" (UniqueName: \"kubernetes.io/projected/c837fca4-ae2c-43fd-850c-f2aca8331d27-kube-api-access-bq6km\") pod \"c837fca4-ae2c-43fd-850c-f2aca8331d27\" (UID: \"c837fca4-ae2c-43fd-850c-f2aca8331d27\") " Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.792476 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c837fca4-ae2c-43fd-850c-f2aca8331d27-combined-ca-bundle\") pod \"c837fca4-ae2c-43fd-850c-f2aca8331d27\" (UID: \"c837fca4-ae2c-43fd-850c-f2aca8331d27\") " Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.792547 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c837fca4-ae2c-43fd-850c-f2aca8331d27-scripts\") pod \"c837fca4-ae2c-43fd-850c-f2aca8331d27\" (UID: \"c837fca4-ae2c-43fd-850c-f2aca8331d27\") " Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.792567 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c837fca4-ae2c-43fd-850c-f2aca8331d27-horizon-secret-key\") pod \"c837fca4-ae2c-43fd-850c-f2aca8331d27\" (UID: \"c837fca4-ae2c-43fd-850c-f2aca8331d27\") " Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.792608 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c837fca4-ae2c-43fd-850c-f2aca8331d27-config-data\") pod \"c837fca4-ae2c-43fd-850c-f2aca8331d27\" (UID: \"c837fca4-ae2c-43fd-850c-f2aca8331d27\") " Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.792700 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c837fca4-ae2c-43fd-850c-f2aca8331d27-horizon-tls-certs\") pod \"c837fca4-ae2c-43fd-850c-f2aca8331d27\" (UID: \"c837fca4-ae2c-43fd-850c-f2aca8331d27\") " Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.794707 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c837fca4-ae2c-43fd-850c-f2aca8331d27-logs" (OuterVolumeSpecName: "logs") pod "c837fca4-ae2c-43fd-850c-f2aca8331d27" (UID: "c837fca4-ae2c-43fd-850c-f2aca8331d27"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.801225 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c837fca4-ae2c-43fd-850c-f2aca8331d27-kube-api-access-bq6km" (OuterVolumeSpecName: "kube-api-access-bq6km") pod "c837fca4-ae2c-43fd-850c-f2aca8331d27" (UID: "c837fca4-ae2c-43fd-850c-f2aca8331d27"). InnerVolumeSpecName "kube-api-access-bq6km". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.802118 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c837fca4-ae2c-43fd-850c-f2aca8331d27-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c837fca4-ae2c-43fd-850c-f2aca8331d27" (UID: "c837fca4-ae2c-43fd-850c-f2aca8331d27"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.829925 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c837fca4-ae2c-43fd-850c-f2aca8331d27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c837fca4-ae2c-43fd-850c-f2aca8331d27" (UID: "c837fca4-ae2c-43fd-850c-f2aca8331d27"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.830756 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c837fca4-ae2c-43fd-850c-f2aca8331d27-scripts" (OuterVolumeSpecName: "scripts") pod "c837fca4-ae2c-43fd-850c-f2aca8331d27" (UID: "c837fca4-ae2c-43fd-850c-f2aca8331d27"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.845217 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c837fca4-ae2c-43fd-850c-f2aca8331d27-config-data" (OuterVolumeSpecName: "config-data") pod "c837fca4-ae2c-43fd-850c-f2aca8331d27" (UID: "c837fca4-ae2c-43fd-850c-f2aca8331d27"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.862933 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c837fca4-ae2c-43fd-850c-f2aca8331d27-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "c837fca4-ae2c-43fd-850c-f2aca8331d27" (UID: "c837fca4-ae2c-43fd-850c-f2aca8331d27"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.894870 4892 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c837fca4-ae2c-43fd-850c-f2aca8331d27-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.894901 4892 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c837fca4-ae2c-43fd-850c-f2aca8331d27-logs\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.894931 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bq6km\" (UniqueName: \"kubernetes.io/projected/c837fca4-ae2c-43fd-850c-f2aca8331d27-kube-api-access-bq6km\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.894942 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c837fca4-ae2c-43fd-850c-f2aca8331d27-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.894950 4892 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c837fca4-ae2c-43fd-850c-f2aca8331d27-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.894958 4892 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c837fca4-ae2c-43fd-850c-f2aca8331d27-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:29 crc kubenswrapper[4892]: I0122 09:29:29.894967 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c837fca4-ae2c-43fd-850c-f2aca8331d27-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:30 crc kubenswrapper[4892]: I0122 09:29:30.541579 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79777d5484-zk25q" event={"ID":"c837fca4-ae2c-43fd-850c-f2aca8331d27","Type":"ContainerDied","Data":"007f9513e989fb532fef7d363ef4362fac0b90a50ff7a3e628f1010a985c6af1"} Jan 22 09:29:30 crc kubenswrapper[4892]: I0122 09:29:30.541649 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-79777d5484-zk25q" Jan 22 09:29:30 crc kubenswrapper[4892]: I0122 09:29:30.541894 4892 scope.go:117] "RemoveContainer" containerID="a517a1366ff75b3114aba08a5cd170cff1a9a46111a0373888655bd0a308fa5e" Jan 22 09:29:30 crc kubenswrapper[4892]: I0122 09:29:30.547852 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e16d2673-ef7d-40c6-b1ae-c43fc8771d30","Type":"ContainerStarted","Data":"22d8c1d56a2507bb6b39ac8393201e0eb8c6596dd7837d677b4ac215b982a1f8"} Jan 22 09:29:30 crc kubenswrapper[4892]: I0122 09:29:30.547895 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e16d2673-ef7d-40c6-b1ae-c43fc8771d30","Type":"ContainerStarted","Data":"eb6f4936d1c34e2da7d8380cbf92f8fa373de9f2bace51f2f3495a9e05667550"} Jan 22 09:29:30 crc kubenswrapper[4892]: I0122 09:29:30.550723 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c0d04b37-82ff-4c76-ab88-4602d405c9e0","Type":"ContainerStarted","Data":"d13e2cfabfb2de0668490941a6b098cebcbb9ed50f72a62f478a987dbc0bf0d8"} Jan 22 09:29:30 crc kubenswrapper[4892]: I0122 09:29:30.559086 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e806d349-4c1e-4dc9-836a-ece0da878110","Type":"ContainerStarted","Data":"930e3285fd5ea20f821d740427051e575da4a0f88357f591ccb91867e1f48efd"} Jan 22 09:29:30 crc kubenswrapper[4892]: I0122 09:29:30.559232 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e806d349-4c1e-4dc9-836a-ece0da878110" containerName="ceilometer-central-agent" containerID="cri-o://44858298fbb514f65a32b615af3c0284132892fe56df411d8b445a419753819d" gracePeriod=30 Jan 22 09:29:30 crc kubenswrapper[4892]: I0122 09:29:30.559446 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 09:29:30 crc kubenswrapper[4892]: I0122 09:29:30.559487 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e806d349-4c1e-4dc9-836a-ece0da878110" containerName="proxy-httpd" containerID="cri-o://930e3285fd5ea20f821d740427051e575da4a0f88357f591ccb91867e1f48efd" gracePeriod=30 Jan 22 09:29:30 crc kubenswrapper[4892]: I0122 09:29:30.559528 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e806d349-4c1e-4dc9-836a-ece0da878110" containerName="sg-core" containerID="cri-o://451ed154220b8d6db1463825dd45065f5c4a9dc2fb4292fb20dd8f7405357f0f" gracePeriod=30 Jan 22 09:29:30 crc kubenswrapper[4892]: I0122 09:29:30.559565 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e806d349-4c1e-4dc9-836a-ece0da878110" containerName="ceilometer-notification-agent" containerID="cri-o://6a855fe173290ab7bc40bdfdeb4813df9e4c4f9d8ae704973ee2706ddf2c78a8" gracePeriod=30 Jan 22 09:29:30 crc kubenswrapper[4892]: I0122 09:29:30.570708 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.570694435 podStartE2EDuration="4.570694435s" podCreationTimestamp="2026-01-22 09:29:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:29:30.570127722 +0000 UTC m=+1140.414206785" 
watchObservedRunningTime="2026-01-22 09:29:30.570694435 +0000 UTC m=+1140.414773498" Jan 22 09:29:30 crc kubenswrapper[4892]: I0122 09:29:30.600701 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-79777d5484-zk25q"] Jan 22 09:29:30 crc kubenswrapper[4892]: I0122 09:29:30.614377 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-79777d5484-zk25q"] Jan 22 09:29:30 crc kubenswrapper[4892]: I0122 09:29:30.620230 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.57608043 podStartE2EDuration="8.620206637s" podCreationTimestamp="2026-01-22 09:29:22 +0000 UTC" firstStartedPulling="2026-01-22 09:29:23.193264496 +0000 UTC m=+1133.037343549" lastFinishedPulling="2026-01-22 09:29:29.237390693 +0000 UTC m=+1139.081469756" observedRunningTime="2026-01-22 09:29:30.613588517 +0000 UTC m=+1140.457667580" watchObservedRunningTime="2026-01-22 09:29:30.620206637 +0000 UTC m=+1140.464285700" Jan 22 09:29:30 crc kubenswrapper[4892]: I0122 09:29:30.789514 4892 scope.go:117] "RemoveContainer" containerID="29a40a09756cba7b3751e000919e2a9027e341f2b2d60abf1bab59b34c64bafb" Jan 22 09:29:31 crc kubenswrapper[4892]: I0122 09:29:31.436106 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c837fca4-ae2c-43fd-850c-f2aca8331d27" path="/var/lib/kubelet/pods/c837fca4-ae2c-43fd-850c-f2aca8331d27/volumes" Jan 22 09:29:31 crc kubenswrapper[4892]: I0122 09:29:31.577280 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e16d2673-ef7d-40c6-b1ae-c43fc8771d30","Type":"ContainerStarted","Data":"ec39c8448972971f59709eb5ffa0615a73ca5e4dbc1a919cbe28f01050b2580a"} Jan 22 09:29:31 crc kubenswrapper[4892]: I0122 09:29:31.583484 4892 generic.go:334] "Generic (PLEG): container finished" podID="e806d349-4c1e-4dc9-836a-ece0da878110" containerID="930e3285fd5ea20f821d740427051e575da4a0f88357f591ccb91867e1f48efd" exitCode=0 Jan 22 09:29:31 crc kubenswrapper[4892]: I0122 09:29:31.583524 4892 generic.go:334] "Generic (PLEG): container finished" podID="e806d349-4c1e-4dc9-836a-ece0da878110" containerID="451ed154220b8d6db1463825dd45065f5c4a9dc2fb4292fb20dd8f7405357f0f" exitCode=2 Jan 22 09:29:31 crc kubenswrapper[4892]: I0122 09:29:31.583535 4892 generic.go:334] "Generic (PLEG): container finished" podID="e806d349-4c1e-4dc9-836a-ece0da878110" containerID="6a855fe173290ab7bc40bdfdeb4813df9e4c4f9d8ae704973ee2706ddf2c78a8" exitCode=0 Jan 22 09:29:31 crc kubenswrapper[4892]: I0122 09:29:31.583549 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e806d349-4c1e-4dc9-836a-ece0da878110","Type":"ContainerDied","Data":"930e3285fd5ea20f821d740427051e575da4a0f88357f591ccb91867e1f48efd"} Jan 22 09:29:31 crc kubenswrapper[4892]: I0122 09:29:31.583588 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e806d349-4c1e-4dc9-836a-ece0da878110","Type":"ContainerDied","Data":"451ed154220b8d6db1463825dd45065f5c4a9dc2fb4292fb20dd8f7405357f0f"} Jan 22 09:29:31 crc kubenswrapper[4892]: I0122 09:29:31.583601 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e806d349-4c1e-4dc9-836a-ece0da878110","Type":"ContainerDied","Data":"6a855fe173290ab7bc40bdfdeb4813df9e4c4f9d8ae704973ee2706ddf2c78a8"} Jan 22 09:29:31 crc kubenswrapper[4892]: I0122 09:29:31.597547 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-default-external-api-0" podStartSLOduration=3.59752128 podStartE2EDuration="3.59752128s" podCreationTimestamp="2026-01-22 09:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:29:31.595341327 +0000 UTC m=+1141.439420400" watchObservedRunningTime="2026-01-22 09:29:31.59752128 +0000 UTC m=+1141.441600343" Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.495513 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hwvgb"] Jan 22 09:29:34 crc kubenswrapper[4892]: E0122 09:29:34.496449 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b08dee6e-948e-480a-8642-ee350e0a05f1" containerName="mariadb-account-create-update" Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.496463 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="b08dee6e-948e-480a-8642-ee350e0a05f1" containerName="mariadb-account-create-update" Jan 22 09:29:34 crc kubenswrapper[4892]: E0122 09:29:34.496477 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c837fca4-ae2c-43fd-850c-f2aca8331d27" containerName="horizon-log" Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.496483 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="c837fca4-ae2c-43fd-850c-f2aca8331d27" containerName="horizon-log" Jan 22 09:29:34 crc kubenswrapper[4892]: E0122 09:29:34.496496 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb" containerName="mariadb-account-create-update" Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.496503 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb" containerName="mariadb-account-create-update" Jan 22 09:29:34 crc kubenswrapper[4892]: E0122 09:29:34.496525 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="962154cf-811c-41cc-a43a-e63af5139dc8" containerName="mariadb-account-create-update" Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.496531 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="962154cf-811c-41cc-a43a-e63af5139dc8" containerName="mariadb-account-create-update" Jan 22 09:29:34 crc kubenswrapper[4892]: E0122 09:29:34.496537 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c837fca4-ae2c-43fd-850c-f2aca8331d27" containerName="horizon" Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.496543 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="c837fca4-ae2c-43fd-850c-f2aca8331d27" containerName="horizon" Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.496715 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="962154cf-811c-41cc-a43a-e63af5139dc8" containerName="mariadb-account-create-update" Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.496726 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="c837fca4-ae2c-43fd-850c-f2aca8331d27" containerName="horizon-log" Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.496735 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb" containerName="mariadb-account-create-update" Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.496744 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="b08dee6e-948e-480a-8642-ee350e0a05f1" containerName="mariadb-account-create-update" Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 
Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.496756 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="c837fca4-ae2c-43fd-850c-f2aca8331d27" containerName="horizon"
Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.497411 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hwvgb"
Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.515394 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-w6rft"
Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.515824 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.516035 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.552235 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hwvgb"]
Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.571298 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/606e5e49-0a85-4337-8aa2-12216467367e-scripts\") pod \"nova-cell0-conductor-db-sync-hwvgb\" (UID: \"606e5e49-0a85-4337-8aa2-12216467367e\") " pod="openstack/nova-cell0-conductor-db-sync-hwvgb"
Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.571365 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/606e5e49-0a85-4337-8aa2-12216467367e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-hwvgb\" (UID: \"606e5e49-0a85-4337-8aa2-12216467367e\") " pod="openstack/nova-cell0-conductor-db-sync-hwvgb"
Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.571414 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/606e5e49-0a85-4337-8aa2-12216467367e-config-data\") pod \"nova-cell0-conductor-db-sync-hwvgb\" (UID: \"606e5e49-0a85-4337-8aa2-12216467367e\") " pod="openstack/nova-cell0-conductor-db-sync-hwvgb"
Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.571449 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8cpx\" (UniqueName: \"kubernetes.io/projected/606e5e49-0a85-4337-8aa2-12216467367e-kube-api-access-r8cpx\") pod \"nova-cell0-conductor-db-sync-hwvgb\" (UID: \"606e5e49-0a85-4337-8aa2-12216467367e\") " pod="openstack/nova-cell0-conductor-db-sync-hwvgb"
Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.610575 4892 generic.go:334] "Generic (PLEG): container finished" podID="e806d349-4c1e-4dc9-836a-ece0da878110" containerID="44858298fbb514f65a32b615af3c0284132892fe56df411d8b445a419753819d" exitCode=0
Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.610610 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e806d349-4c1e-4dc9-836a-ece0da878110","Type":"ContainerDied","Data":"44858298fbb514f65a32b615af3c0284132892fe56df411d8b445a419753819d"}
Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.610633 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e806d349-4c1e-4dc9-836a-ece0da878110","Type":"ContainerDied","Data":"b1186d8935a699cc500289b8dee01237c9bf9f04c0e9b339fc6331b1ad63e653"} Jan
22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.610643 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1186d8935a699cc500289b8dee01237c9bf9f04c0e9b339fc6331b1ad63e653" Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.667312 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.672529 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/606e5e49-0a85-4337-8aa2-12216467367e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-hwvgb\" (UID: \"606e5e49-0a85-4337-8aa2-12216467367e\") " pod="openstack/nova-cell0-conductor-db-sync-hwvgb" Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.672576 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/606e5e49-0a85-4337-8aa2-12216467367e-config-data\") pod \"nova-cell0-conductor-db-sync-hwvgb\" (UID: \"606e5e49-0a85-4337-8aa2-12216467367e\") " pod="openstack/nova-cell0-conductor-db-sync-hwvgb" Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.672613 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8cpx\" (UniqueName: \"kubernetes.io/projected/606e5e49-0a85-4337-8aa2-12216467367e-kube-api-access-r8cpx\") pod \"nova-cell0-conductor-db-sync-hwvgb\" (UID: \"606e5e49-0a85-4337-8aa2-12216467367e\") " pod="openstack/nova-cell0-conductor-db-sync-hwvgb" Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.672694 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/606e5e49-0a85-4337-8aa2-12216467367e-scripts\") pod \"nova-cell0-conductor-db-sync-hwvgb\" (UID: \"606e5e49-0a85-4337-8aa2-12216467367e\") " pod="openstack/nova-cell0-conductor-db-sync-hwvgb" Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.679646 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/606e5e49-0a85-4337-8aa2-12216467367e-scripts\") pod \"nova-cell0-conductor-db-sync-hwvgb\" (UID: \"606e5e49-0a85-4337-8aa2-12216467367e\") " pod="openstack/nova-cell0-conductor-db-sync-hwvgb" Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.679737 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/606e5e49-0a85-4337-8aa2-12216467367e-config-data\") pod \"nova-cell0-conductor-db-sync-hwvgb\" (UID: \"606e5e49-0a85-4337-8aa2-12216467367e\") " pod="openstack/nova-cell0-conductor-db-sync-hwvgb" Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.690375 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/606e5e49-0a85-4337-8aa2-12216467367e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-hwvgb\" (UID: \"606e5e49-0a85-4337-8aa2-12216467367e\") " pod="openstack/nova-cell0-conductor-db-sync-hwvgb" Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.695403 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8cpx\" (UniqueName: \"kubernetes.io/projected/606e5e49-0a85-4337-8aa2-12216467367e-kube-api-access-r8cpx\") pod \"nova-cell0-conductor-db-sync-hwvgb\" (UID: \"606e5e49-0a85-4337-8aa2-12216467367e\") " pod="openstack/nova-cell0-conductor-db-sync-hwvgb" Jan 
22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.773622 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e806d349-4c1e-4dc9-836a-ece0da878110-run-httpd\") pod \"e806d349-4c1e-4dc9-836a-ece0da878110\" (UID: \"e806d349-4c1e-4dc9-836a-ece0da878110\") " Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.773750 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e806d349-4c1e-4dc9-836a-ece0da878110-combined-ca-bundle\") pod \"e806d349-4c1e-4dc9-836a-ece0da878110\" (UID: \"e806d349-4c1e-4dc9-836a-ece0da878110\") " Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.773826 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e806d349-4c1e-4dc9-836a-ece0da878110-scripts\") pod \"e806d349-4c1e-4dc9-836a-ece0da878110\" (UID: \"e806d349-4c1e-4dc9-836a-ece0da878110\") " Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.773894 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e806d349-4c1e-4dc9-836a-ece0da878110-sg-core-conf-yaml\") pod \"e806d349-4c1e-4dc9-836a-ece0da878110\" (UID: \"e806d349-4c1e-4dc9-836a-ece0da878110\") " Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.774184 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e806d349-4c1e-4dc9-836a-ece0da878110-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e806d349-4c1e-4dc9-836a-ece0da878110" (UID: "e806d349-4c1e-4dc9-836a-ece0da878110"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.774873 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wttjg\" (UniqueName: \"kubernetes.io/projected/e806d349-4c1e-4dc9-836a-ece0da878110-kube-api-access-wttjg\") pod \"e806d349-4c1e-4dc9-836a-ece0da878110\" (UID: \"e806d349-4c1e-4dc9-836a-ece0da878110\") " Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.774898 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e806d349-4c1e-4dc9-836a-ece0da878110-log-httpd\") pod \"e806d349-4c1e-4dc9-836a-ece0da878110\" (UID: \"e806d349-4c1e-4dc9-836a-ece0da878110\") " Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.774934 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e806d349-4c1e-4dc9-836a-ece0da878110-config-data\") pod \"e806d349-4c1e-4dc9-836a-ece0da878110\" (UID: \"e806d349-4c1e-4dc9-836a-ece0da878110\") " Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.775325 4892 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e806d349-4c1e-4dc9-836a-ece0da878110-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.776138 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e806d349-4c1e-4dc9-836a-ece0da878110-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e806d349-4c1e-4dc9-836a-ece0da878110" (UID: "e806d349-4c1e-4dc9-836a-ece0da878110"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.778174 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e806d349-4c1e-4dc9-836a-ece0da878110-scripts" (OuterVolumeSpecName: "scripts") pod "e806d349-4c1e-4dc9-836a-ece0da878110" (UID: "e806d349-4c1e-4dc9-836a-ece0da878110"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.779953 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e806d349-4c1e-4dc9-836a-ece0da878110-kube-api-access-wttjg" (OuterVolumeSpecName: "kube-api-access-wttjg") pod "e806d349-4c1e-4dc9-836a-ece0da878110" (UID: "e806d349-4c1e-4dc9-836a-ece0da878110"). InnerVolumeSpecName "kube-api-access-wttjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.816490 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e806d349-4c1e-4dc9-836a-ece0da878110-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e806d349-4c1e-4dc9-836a-ece0da878110" (UID: "e806d349-4c1e-4dc9-836a-ece0da878110"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.854492 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e806d349-4c1e-4dc9-836a-ece0da878110-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e806d349-4c1e-4dc9-836a-ece0da878110" (UID: "e806d349-4c1e-4dc9-836a-ece0da878110"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.854928 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hwvgb" Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.876365 4892 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e806d349-4c1e-4dc9-836a-ece0da878110-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.876395 4892 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e806d349-4c1e-4dc9-836a-ece0da878110-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.876408 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wttjg\" (UniqueName: \"kubernetes.io/projected/e806d349-4c1e-4dc9-836a-ece0da878110-kube-api-access-wttjg\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.876420 4892 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e806d349-4c1e-4dc9-836a-ece0da878110-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.876430 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e806d349-4c1e-4dc9-836a-ece0da878110-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.880039 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e806d349-4c1e-4dc9-836a-ece0da878110-config-data" (OuterVolumeSpecName: "config-data") pod "e806d349-4c1e-4dc9-836a-ece0da878110" (UID: "e806d349-4c1e-4dc9-836a-ece0da878110"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:34 crc kubenswrapper[4892]: I0122 09:29:34.990037 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e806d349-4c1e-4dc9-836a-ece0da878110-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.329682 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hwvgb"] Jan 22 09:29:35 crc kubenswrapper[4892]: W0122 09:29:35.335642 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod606e5e49_0a85_4337_8aa2_12216467367e.slice/crio-5b984d68e35a84ece0c196369334568bbbfca99391ad835a2e31c4b537136f4a WatchSource:0}: Error finding container 5b984d68e35a84ece0c196369334568bbbfca99391ad835a2e31c4b537136f4a: Status 404 returned error can't find the container with id 5b984d68e35a84ece0c196369334568bbbfca99391ad835a2e31c4b537136f4a Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.618322 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hwvgb" event={"ID":"606e5e49-0a85-4337-8aa2-12216467367e","Type":"ContainerStarted","Data":"5b984d68e35a84ece0c196369334568bbbfca99391ad835a2e31c4b537136f4a"} Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.618378 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.642444 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.654677 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.673258 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:29:35 crc kubenswrapper[4892]: E0122 09:29:35.673632 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e806d349-4c1e-4dc9-836a-ece0da878110" containerName="proxy-httpd" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.673645 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="e806d349-4c1e-4dc9-836a-ece0da878110" containerName="proxy-httpd" Jan 22 09:29:35 crc kubenswrapper[4892]: E0122 09:29:35.673662 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e806d349-4c1e-4dc9-836a-ece0da878110" containerName="ceilometer-notification-agent" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.673670 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="e806d349-4c1e-4dc9-836a-ece0da878110" containerName="ceilometer-notification-agent" Jan 22 09:29:35 crc kubenswrapper[4892]: E0122 09:29:35.673694 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e806d349-4c1e-4dc9-836a-ece0da878110" containerName="ceilometer-central-agent" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.673700 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="e806d349-4c1e-4dc9-836a-ece0da878110" containerName="ceilometer-central-agent" Jan 22 09:29:35 crc kubenswrapper[4892]: E0122 09:29:35.673708 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e806d349-4c1e-4dc9-836a-ece0da878110" containerName="sg-core" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.673714 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="e806d349-4c1e-4dc9-836a-ece0da878110" containerName="sg-core" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.673897 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="e806d349-4c1e-4dc9-836a-ece0da878110" containerName="ceilometer-notification-agent" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.673922 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="e806d349-4c1e-4dc9-836a-ece0da878110" containerName="ceilometer-central-agent" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.673939 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="e806d349-4c1e-4dc9-836a-ece0da878110" containerName="proxy-httpd" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.673951 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="e806d349-4c1e-4dc9-836a-ece0da878110" containerName="sg-core" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.675489 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.677271 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.677275 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.680997 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.715786 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f872bc76-803a-44e5-90a0-b29a5d6b94a6-scripts\") pod \"ceilometer-0\" (UID: \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\") " pod="openstack/ceilometer-0" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.715876 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f872bc76-803a-44e5-90a0-b29a5d6b94a6-log-httpd\") pod \"ceilometer-0\" (UID: \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\") " pod="openstack/ceilometer-0" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.715904 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsdxm\" (UniqueName: \"kubernetes.io/projected/f872bc76-803a-44e5-90a0-b29a5d6b94a6-kube-api-access-hsdxm\") pod \"ceilometer-0\" (UID: \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\") " pod="openstack/ceilometer-0" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.715950 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f872bc76-803a-44e5-90a0-b29a5d6b94a6-run-httpd\") pod \"ceilometer-0\" (UID: \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\") " pod="openstack/ceilometer-0" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.716060 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f872bc76-803a-44e5-90a0-b29a5d6b94a6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\") " pod="openstack/ceilometer-0" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.716108 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f872bc76-803a-44e5-90a0-b29a5d6b94a6-config-data\") pod \"ceilometer-0\" (UID: \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\") " pod="openstack/ceilometer-0" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.716144 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f872bc76-803a-44e5-90a0-b29a5d6b94a6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\") " pod="openstack/ceilometer-0" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.817710 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f872bc76-803a-44e5-90a0-b29a5d6b94a6-scripts\") pod \"ceilometer-0\" (UID: \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\") " pod="openstack/ceilometer-0" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.818057 4892 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f872bc76-803a-44e5-90a0-b29a5d6b94a6-log-httpd\") pod \"ceilometer-0\" (UID: \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\") " pod="openstack/ceilometer-0" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.818087 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsdxm\" (UniqueName: \"kubernetes.io/projected/f872bc76-803a-44e5-90a0-b29a5d6b94a6-kube-api-access-hsdxm\") pod \"ceilometer-0\" (UID: \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\") " pod="openstack/ceilometer-0" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.818132 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f872bc76-803a-44e5-90a0-b29a5d6b94a6-run-httpd\") pod \"ceilometer-0\" (UID: \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\") " pod="openstack/ceilometer-0" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.818166 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f872bc76-803a-44e5-90a0-b29a5d6b94a6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\") " pod="openstack/ceilometer-0" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.818194 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f872bc76-803a-44e5-90a0-b29a5d6b94a6-config-data\") pod \"ceilometer-0\" (UID: \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\") " pod="openstack/ceilometer-0" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.818214 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f872bc76-803a-44e5-90a0-b29a5d6b94a6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\") " pod="openstack/ceilometer-0" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.818675 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f872bc76-803a-44e5-90a0-b29a5d6b94a6-log-httpd\") pod \"ceilometer-0\" (UID: \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\") " pod="openstack/ceilometer-0" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.818724 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f872bc76-803a-44e5-90a0-b29a5d6b94a6-run-httpd\") pod \"ceilometer-0\" (UID: \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\") " pod="openstack/ceilometer-0" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.822894 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f872bc76-803a-44e5-90a0-b29a5d6b94a6-scripts\") pod \"ceilometer-0\" (UID: \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\") " pod="openstack/ceilometer-0" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.823257 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f872bc76-803a-44e5-90a0-b29a5d6b94a6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\") " pod="openstack/ceilometer-0" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.824159 4892 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f872bc76-803a-44e5-90a0-b29a5d6b94a6-config-data\") pod \"ceilometer-0\" (UID: \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\") " pod="openstack/ceilometer-0" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.825213 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f872bc76-803a-44e5-90a0-b29a5d6b94a6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\") " pod="openstack/ceilometer-0" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.848299 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsdxm\" (UniqueName: \"kubernetes.io/projected/f872bc76-803a-44e5-90a0-b29a5d6b94a6-kube-api-access-hsdxm\") pod \"ceilometer-0\" (UID: \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\") " pod="openstack/ceilometer-0" Jan 22 09:29:35 crc kubenswrapper[4892]: I0122 09:29:35.991396 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:29:36 crc kubenswrapper[4892]: I0122 09:29:36.688792 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:29:36 crc kubenswrapper[4892]: W0122 09:29:36.709550 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf872bc76_803a_44e5_90a0_b29a5d6b94a6.slice/crio-2bdd80773e08ed24420dc54259e9f84febff862ac352559cdb563e66ff1be5de WatchSource:0}: Error finding container 2bdd80773e08ed24420dc54259e9f84febff862ac352559cdb563e66ff1be5de: Status 404 returned error can't find the container with id 2bdd80773e08ed24420dc54259e9f84febff862ac352559cdb563e66ff1be5de Jan 22 09:29:37 crc kubenswrapper[4892]: I0122 09:29:37.159183 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 22 09:29:37 crc kubenswrapper[4892]: I0122 09:29:37.159560 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 22 09:29:37 crc kubenswrapper[4892]: I0122 09:29:37.207807 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 22 09:29:37 crc kubenswrapper[4892]: I0122 09:29:37.212542 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 22 09:29:37 crc kubenswrapper[4892]: I0122 09:29:37.409121 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:29:37 crc kubenswrapper[4892]: I0122 09:29:37.436922 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e806d349-4c1e-4dc9-836a-ece0da878110" path="/var/lib/kubelet/pods/e806d349-4c1e-4dc9-836a-ece0da878110/volumes" Jan 22 09:29:37 crc kubenswrapper[4892]: I0122 09:29:37.638675 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f872bc76-803a-44e5-90a0-b29a5d6b94a6","Type":"ContainerStarted","Data":"1ad34a3dfadd174e28013056be367bd82aef61b2d55e8dfefc09ec1293c54530"} Jan 22 09:29:37 crc kubenswrapper[4892]: I0122 09:29:37.638717 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f872bc76-803a-44e5-90a0-b29a5d6b94a6","Type":"ContainerStarted","Data":"2bdd80773e08ed24420dc54259e9f84febff862ac352559cdb563e66ff1be5de"} Jan 22 09:29:37 crc 
Jan 22 09:29:37 crc kubenswrapper[4892]: I0122 09:29:37.638738 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 22 09:29:37 crc kubenswrapper[4892]: I0122 09:29:37.638873 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 22 09:29:38 crc kubenswrapper[4892]: I0122 09:29:38.655110 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f872bc76-803a-44e5-90a0-b29a5d6b94a6","Type":"ContainerStarted","Data":"895008d81085733fc1f126d6d7c4ae2eaef4f1e040c700a4f31ba015df65f41d"}
Jan 22 09:29:38 crc kubenswrapper[4892]: I0122 09:29:38.894149 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 22 09:29:38 crc kubenswrapper[4892]: I0122 09:29:38.894494 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 22 09:29:38 crc kubenswrapper[4892]: I0122 09:29:38.932608 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 22 09:29:38 crc kubenswrapper[4892]: I0122 09:29:38.942506 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 22 09:29:39 crc kubenswrapper[4892]: I0122 09:29:39.500073 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 22 09:29:39 crc kubenswrapper[4892]: I0122 09:29:39.502571 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 22 09:29:39 crc kubenswrapper[4892]: I0122 09:29:39.663228 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 22 09:29:39 crc kubenswrapper[4892]: I0122 09:29:39.663273 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 22 09:29:41 crc kubenswrapper[4892]: I0122 09:29:41.780623 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 22 09:29:41 crc kubenswrapper[4892]: I0122 09:29:41.780749 4892 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 22 09:29:41 crc kubenswrapper[4892]: I0122 09:29:41.782010 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 22 09:29:45 crc kubenswrapper[4892]: I0122 09:29:45.722672 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f872bc76-803a-44e5-90a0-b29a5d6b94a6","Type":"ContainerStarted","Data":"f00a3f9b139af32403690fb26175ee2e4e57b34942825cfee3aa1a1977d0ec6b"}
Jan 22 09:29:45 crc kubenswrapper[4892]: I0122 09:29:45.725871 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hwvgb" event={"ID":"606e5e49-0a85-4337-8aa2-12216467367e","Type":"ContainerStarted","Data":"633c083489e5e6b148a2be15a7b2c054065f30aee0fd2bbf3554e768144ef9f1"}
Jan 22 09:29:47 crc kubenswrapper[4892]: I0122 09:29:47.752510 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f872bc76-803a-44e5-90a0-b29a5d6b94a6","Type":"ContainerStarted","Data":"4139532e79d6f72e163154c717677701af63f52d2b01808b381938eb716c2876"}
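
The probe records above show the kubelet's gating order: each container's startup probe reports "unhealthy" until its first success flips it to "started", and only then are readiness results published, first as an empty status and then as "ready" (the doubled lines are presumably the two probed containers in each glance pod reporting separately). The lone prober_manager "Failed to trigger a manual run" suggests a requested immediate readiness check was skipped, presumably because one was already queued. A toy state machine of that ordering, an assumed simplification rather than the prober's real types:

package main

import "fmt"

type container struct{ started, ready bool }

// observe applies one probe result; readiness is ignored until the startup
// probe has succeeded, mirroring the transitions in the records above.
func (c *container) observe(probe string, ok bool) string {
	switch probe {
	case "startup":
		c.started = ok
	case "readiness":
		if !c.started {
			return "readiness deferred until startup succeeds"
		}
		c.ready = ok
	}
	return fmt.Sprintf("started=%v ready=%v", c.started, c.ready)
}

func main() {
	c := &container{}
	fmt.Println(c.observe("readiness", true)) // deferred
	fmt.Println(c.observe("startup", false))  // unhealthy
	fmt.Println(c.observe("startup", true))   // started
	fmt.Println(c.observe("readiness", true)) // ready
}
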
Jan 22 09:29:47 crc kubenswrapper[4892]: I0122 09:29:47.753121 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 22 09:29:47 crc kubenswrapper[4892]: I0122 09:29:47.752852 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f872bc76-803a-44e5-90a0-b29a5d6b94a6" containerName="ceilometer-central-agent" containerID="cri-o://1ad34a3dfadd174e28013056be367bd82aef61b2d55e8dfefc09ec1293c54530" gracePeriod=30
Jan 22 09:29:47 crc kubenswrapper[4892]: I0122 09:29:47.753237 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f872bc76-803a-44e5-90a0-b29a5d6b94a6" containerName="proxy-httpd" containerID="cri-o://4139532e79d6f72e163154c717677701af63f52d2b01808b381938eb716c2876" gracePeriod=30
Jan 22 09:29:47 crc kubenswrapper[4892]: I0122 09:29:47.753333 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f872bc76-803a-44e5-90a0-b29a5d6b94a6" containerName="sg-core" containerID="cri-o://f00a3f9b139af32403690fb26175ee2e4e57b34942825cfee3aa1a1977d0ec6b" gracePeriod=30
Jan 22 09:29:47 crc kubenswrapper[4892]: I0122 09:29:47.753381 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f872bc76-803a-44e5-90a0-b29a5d6b94a6" containerName="ceilometer-notification-agent" containerID="cri-o://895008d81085733fc1f126d6d7c4ae2eaef4f1e040c700a4f31ba015df65f41d" gracePeriod=30
Jan 22 09:29:47 crc kubenswrapper[4892]: I0122 09:29:47.791812 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.618267801 podStartE2EDuration="12.791797841s" podCreationTimestamp="2026-01-22 09:29:35 +0000 UTC" firstStartedPulling="2026-01-22 09:29:36.712129846 +0000 UTC m=+1146.556208909" lastFinishedPulling="2026-01-22 09:29:46.885659856 +0000 UTC m=+1156.729738949" observedRunningTime="2026-01-22 09:29:47.787934617 +0000 UTC m=+1157.632013690" watchObservedRunningTime="2026-01-22 09:29:47.791797841 +0000 UTC m=+1157.635876904"
Jan 22 09:29:47 crc kubenswrapper[4892]: I0122 09:29:47.795229 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-hwvgb" podStartSLOduration=3.876588941 podStartE2EDuration="13.795218344s" podCreationTimestamp="2026-01-22 09:29:34 +0000 UTC" firstStartedPulling="2026-01-22 09:29:35.338269638 +0000 UTC m=+1145.182348691" lastFinishedPulling="2026-01-22 09:29:45.256899031 +0000 UTC m=+1155.100978094" observedRunningTime="2026-01-22 09:29:45.752721716 +0000 UTC m=+1155.596800779" watchObservedRunningTime="2026-01-22 09:29:47.795218344 +0000 UTC m=+1157.639297407"
Jan 22 09:29:48 crc kubenswrapper[4892]: I0122 09:29:48.771820 4892 generic.go:334] "Generic (PLEG): container finished" podID="f872bc76-803a-44e5-90a0-b29a5d6b94a6" containerID="4139532e79d6f72e163154c717677701af63f52d2b01808b381938eb716c2876" exitCode=0
Jan 22 09:29:48 crc kubenswrapper[4892]: I0122 09:29:48.772952 4892 generic.go:334] "Generic (PLEG): container finished" podID="f872bc76-803a-44e5-90a0-b29a5d6b94a6" containerID="f00a3f9b139af32403690fb26175ee2e4e57b34942825cfee3aa1a1977d0ec6b" exitCode=2
Jan 22 09:29:48 crc kubenswrapper[4892]: I0122 09:29:48.771922 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0"
event={"ID":"f872bc76-803a-44e5-90a0-b29a5d6b94a6","Type":"ContainerDied","Data":"4139532e79d6f72e163154c717677701af63f52d2b01808b381938eb716c2876"} Jan 22 09:29:48 crc kubenswrapper[4892]: I0122 09:29:48.773129 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f872bc76-803a-44e5-90a0-b29a5d6b94a6","Type":"ContainerDied","Data":"f00a3f9b139af32403690fb26175ee2e4e57b34942825cfee3aa1a1977d0ec6b"} Jan 22 09:29:48 crc kubenswrapper[4892]: I0122 09:29:48.773167 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f872bc76-803a-44e5-90a0-b29a5d6b94a6","Type":"ContainerDied","Data":"895008d81085733fc1f126d6d7c4ae2eaef4f1e040c700a4f31ba015df65f41d"} Jan 22 09:29:48 crc kubenswrapper[4892]: I0122 09:29:48.773052 4892 generic.go:334] "Generic (PLEG): container finished" podID="f872bc76-803a-44e5-90a0-b29a5d6b94a6" containerID="895008d81085733fc1f126d6d7c4ae2eaef4f1e040c700a4f31ba015df65f41d" exitCode=0 Jan 22 09:29:49 crc kubenswrapper[4892]: I0122 09:29:49.789898 4892 generic.go:334] "Generic (PLEG): container finished" podID="f872bc76-803a-44e5-90a0-b29a5d6b94a6" containerID="1ad34a3dfadd174e28013056be367bd82aef61b2d55e8dfefc09ec1293c54530" exitCode=0 Jan 22 09:29:49 crc kubenswrapper[4892]: I0122 09:29:49.789966 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f872bc76-803a-44e5-90a0-b29a5d6b94a6","Type":"ContainerDied","Data":"1ad34a3dfadd174e28013056be367bd82aef61b2d55e8dfefc09ec1293c54530"} Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.433714 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.516215 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f872bc76-803a-44e5-90a0-b29a5d6b94a6-combined-ca-bundle\") pod \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\" (UID: \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\") " Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.516356 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f872bc76-803a-44e5-90a0-b29a5d6b94a6-log-httpd\") pod \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\" (UID: \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\") " Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.516410 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f872bc76-803a-44e5-90a0-b29a5d6b94a6-run-httpd\") pod \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\" (UID: \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\") " Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.516453 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f872bc76-803a-44e5-90a0-b29a5d6b94a6-scripts\") pod \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\" (UID: \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\") " Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.516530 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsdxm\" (UniqueName: \"kubernetes.io/projected/f872bc76-803a-44e5-90a0-b29a5d6b94a6-kube-api-access-hsdxm\") pod \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\" (UID: \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\") " Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.516568 4892 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f872bc76-803a-44e5-90a0-b29a5d6b94a6-config-data\") pod \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\" (UID: \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\") " Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.516644 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f872bc76-803a-44e5-90a0-b29a5d6b94a6-sg-core-conf-yaml\") pod \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\" (UID: \"f872bc76-803a-44e5-90a0-b29a5d6b94a6\") " Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.516975 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f872bc76-803a-44e5-90a0-b29a5d6b94a6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f872bc76-803a-44e5-90a0-b29a5d6b94a6" (UID: "f872bc76-803a-44e5-90a0-b29a5d6b94a6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.517278 4892 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f872bc76-803a-44e5-90a0-b29a5d6b94a6-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.517960 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f872bc76-803a-44e5-90a0-b29a5d6b94a6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f872bc76-803a-44e5-90a0-b29a5d6b94a6" (UID: "f872bc76-803a-44e5-90a0-b29a5d6b94a6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.521642 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f872bc76-803a-44e5-90a0-b29a5d6b94a6-scripts" (OuterVolumeSpecName: "scripts") pod "f872bc76-803a-44e5-90a0-b29a5d6b94a6" (UID: "f872bc76-803a-44e5-90a0-b29a5d6b94a6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.521785 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f872bc76-803a-44e5-90a0-b29a5d6b94a6-kube-api-access-hsdxm" (OuterVolumeSpecName: "kube-api-access-hsdxm") pod "f872bc76-803a-44e5-90a0-b29a5d6b94a6" (UID: "f872bc76-803a-44e5-90a0-b29a5d6b94a6"). InnerVolumeSpecName "kube-api-access-hsdxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.541867 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f872bc76-803a-44e5-90a0-b29a5d6b94a6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f872bc76-803a-44e5-90a0-b29a5d6b94a6" (UID: "f872bc76-803a-44e5-90a0-b29a5d6b94a6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.582156 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f872bc76-803a-44e5-90a0-b29a5d6b94a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f872bc76-803a-44e5-90a0-b29a5d6b94a6" (UID: "f872bc76-803a-44e5-90a0-b29a5d6b94a6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.618571 4892 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f872bc76-803a-44e5-90a0-b29a5d6b94a6-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.618616 4892 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f872bc76-803a-44e5-90a0-b29a5d6b94a6-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.618630 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsdxm\" (UniqueName: \"kubernetes.io/projected/f872bc76-803a-44e5-90a0-b29a5d6b94a6-kube-api-access-hsdxm\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.618647 4892 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f872bc76-803a-44e5-90a0-b29a5d6b94a6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.618659 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f872bc76-803a-44e5-90a0-b29a5d6b94a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.636489 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f872bc76-803a-44e5-90a0-b29a5d6b94a6-config-data" (OuterVolumeSpecName: "config-data") pod "f872bc76-803a-44e5-90a0-b29a5d6b94a6" (UID: "f872bc76-803a-44e5-90a0-b29a5d6b94a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.720478 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f872bc76-803a-44e5-90a0-b29a5d6b94a6-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.806661 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f872bc76-803a-44e5-90a0-b29a5d6b94a6","Type":"ContainerDied","Data":"2bdd80773e08ed24420dc54259e9f84febff862ac352559cdb563e66ff1be5de"} Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.806745 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.807110 4892 scope.go:117] "RemoveContainer" containerID="4139532e79d6f72e163154c717677701af63f52d2b01808b381938eb716c2876"
Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.850274 4892 scope.go:117] "RemoveContainer" containerID="f00a3f9b139af32403690fb26175ee2e4e57b34942825cfee3aa1a1977d0ec6b"
Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.851941 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.863457 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.880390 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 22 09:29:50 crc kubenswrapper[4892]: E0122 09:29:50.881116 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f872bc76-803a-44e5-90a0-b29a5d6b94a6" containerName="sg-core"
Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.881262 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="f872bc76-803a-44e5-90a0-b29a5d6b94a6" containerName="sg-core"
Jan 22 09:29:50 crc kubenswrapper[4892]: E0122 09:29:50.881351 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f872bc76-803a-44e5-90a0-b29a5d6b94a6" containerName="ceilometer-central-agent"
Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.881413 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="f872bc76-803a-44e5-90a0-b29a5d6b94a6" containerName="ceilometer-central-agent"
Jan 22 09:29:50 crc kubenswrapper[4892]: E0122 09:29:50.881481 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f872bc76-803a-44e5-90a0-b29a5d6b94a6" containerName="proxy-httpd"
Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.881530 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="f872bc76-803a-44e5-90a0-b29a5d6b94a6" containerName="proxy-httpd"
Jan 22 09:29:50 crc kubenswrapper[4892]: E0122 09:29:50.881595 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f872bc76-803a-44e5-90a0-b29a5d6b94a6" containerName="ceilometer-notification-agent"
Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.881879 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="f872bc76-803a-44e5-90a0-b29a5d6b94a6" containerName="ceilometer-notification-agent"
Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.882169 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="f872bc76-803a-44e5-90a0-b29a5d6b94a6" containerName="ceilometer-notification-agent"
Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.882240 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="f872bc76-803a-44e5-90a0-b29a5d6b94a6" containerName="proxy-httpd"
Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.882314 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="f872bc76-803a-44e5-90a0-b29a5d6b94a6" containerName="ceilometer-central-agent"
Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.882423 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="f872bc76-803a-44e5-90a0-b29a5d6b94a6" containerName="sg-core"
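
That DELETE/REMOVE/ADD burst is the pod openstack/ceilometer-0 being deleted and immediately recreated under the same name: the name survives while the UID changes, so before admitting the replacement the kubelet's CPU and memory managers sweep per-container state still keyed to the old UID f872bc76-... (the replacement pod's volumes below carry UID a52996eb-0df4-4f58-9f44-186a19c25555). A rough Go sketch of that kind of stale-state sweep, illustrative only and not the kubelet's actual code (all names here are invented):

    package main

    import "fmt"

    // assignments maps podUID -> containerName -> a pinned resource (a CPU set here).
    type assignments map[string]map[string]string

    // removeStaleState drops entries whose pod UID is no longer active, mirroring
    // the cpu_manager/memory_manager "RemoveStaleState" lines in the log above.
    func removeStaleState(a assignments, activeUIDs map[string]bool) {
        for uid, containers := range a {
            if activeUIDs[uid] {
                continue
            }
            for name := range containers {
                fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", uid, name)
            }
            delete(a, uid)
        }
    }

    func main() {
        a := assignments{
            "f872bc76-803a-44e5-90a0-b29a5d6b94a6": {"sg-core": "2-3", "proxy-httpd": "4"},
        }
        // Only the replacement pod's UID is still active.
        removeStaleState(a, map[string]bool{"a52996eb-0df4-4f58-9f44-186a19c25555": true})
    }
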
Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.884226 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.887439 4892 scope.go:117] "RemoveContainer" containerID="895008d81085733fc1f126d6d7c4ae2eaef4f1e040c700a4f31ba015df65f41d"
Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.893015 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.893180 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.908871 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 09:29:50 crc kubenswrapper[4892]: I0122 09:29:50.911479 4892 scope.go:117] "RemoveContainer" containerID="1ad34a3dfadd174e28013056be367bd82aef61b2d55e8dfefc09ec1293c54530"
Jan 22 09:29:51 crc kubenswrapper[4892]: I0122 09:29:51.025926 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a52996eb-0df4-4f58-9f44-186a19c25555-scripts\") pod \"ceilometer-0\" (UID: \"a52996eb-0df4-4f58-9f44-186a19c25555\") " pod="openstack/ceilometer-0"
Jan 22 09:29:51 crc kubenswrapper[4892]: I0122 09:29:51.026365 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n6sd\" (UniqueName: \"kubernetes.io/projected/a52996eb-0df4-4f58-9f44-186a19c25555-kube-api-access-5n6sd\") pod \"ceilometer-0\" (UID: \"a52996eb-0df4-4f58-9f44-186a19c25555\") " pod="openstack/ceilometer-0"
Jan 22 09:29:51 crc kubenswrapper[4892]: I0122 09:29:51.026485 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a52996eb-0df4-4f58-9f44-186a19c25555-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a52996eb-0df4-4f58-9f44-186a19c25555\") " pod="openstack/ceilometer-0"
Jan 22 09:29:51 crc kubenswrapper[4892]: I0122 09:29:51.026584 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a52996eb-0df4-4f58-9f44-186a19c25555-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a52996eb-0df4-4f58-9f44-186a19c25555\") " pod="openstack/ceilometer-0"
Jan 22 09:29:51 crc kubenswrapper[4892]: I0122 09:29:51.026706 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a52996eb-0df4-4f58-9f44-186a19c25555-log-httpd\") pod \"ceilometer-0\" (UID: \"a52996eb-0df4-4f58-9f44-186a19c25555\") " pod="openstack/ceilometer-0"
Jan 22 09:29:51 crc kubenswrapper[4892]: I0122 09:29:51.026797 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a52996eb-0df4-4f58-9f44-186a19c25555-config-data\") pod \"ceilometer-0\" (UID: \"a52996eb-0df4-4f58-9f44-186a19c25555\") " pod="openstack/ceilometer-0"
Jan 22 09:29:51 crc kubenswrapper[4892]: I0122 09:29:51.026887 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a52996eb-0df4-4f58-9f44-186a19c25555-run-httpd\") pod \"ceilometer-0\" (UID: \"a52996eb-0df4-4f58-9f44-186a19c25555\") " pod="openstack/ceilometer-0"
Jan 22 09:29:51 crc kubenswrapper[4892]: I0122
09:29:51.128884 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n6sd\" (UniqueName: \"kubernetes.io/projected/a52996eb-0df4-4f58-9f44-186a19c25555-kube-api-access-5n6sd\") pod \"ceilometer-0\" (UID: \"a52996eb-0df4-4f58-9f44-186a19c25555\") " pod="openstack/ceilometer-0" Jan 22 09:29:51 crc kubenswrapper[4892]: I0122 09:29:51.129141 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a52996eb-0df4-4f58-9f44-186a19c25555-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a52996eb-0df4-4f58-9f44-186a19c25555\") " pod="openstack/ceilometer-0" Jan 22 09:29:51 crc kubenswrapper[4892]: I0122 09:29:51.129213 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a52996eb-0df4-4f58-9f44-186a19c25555-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a52996eb-0df4-4f58-9f44-186a19c25555\") " pod="openstack/ceilometer-0" Jan 22 09:29:51 crc kubenswrapper[4892]: I0122 09:29:51.129338 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a52996eb-0df4-4f58-9f44-186a19c25555-log-httpd\") pod \"ceilometer-0\" (UID: \"a52996eb-0df4-4f58-9f44-186a19c25555\") " pod="openstack/ceilometer-0" Jan 22 09:29:51 crc kubenswrapper[4892]: I0122 09:29:51.129429 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a52996eb-0df4-4f58-9f44-186a19c25555-config-data\") pod \"ceilometer-0\" (UID: \"a52996eb-0df4-4f58-9f44-186a19c25555\") " pod="openstack/ceilometer-0" Jan 22 09:29:51 crc kubenswrapper[4892]: I0122 09:29:51.129498 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a52996eb-0df4-4f58-9f44-186a19c25555-run-httpd\") pod \"ceilometer-0\" (UID: \"a52996eb-0df4-4f58-9f44-186a19c25555\") " pod="openstack/ceilometer-0" Jan 22 09:29:51 crc kubenswrapper[4892]: I0122 09:29:51.129610 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a52996eb-0df4-4f58-9f44-186a19c25555-scripts\") pod \"ceilometer-0\" (UID: \"a52996eb-0df4-4f58-9f44-186a19c25555\") " pod="openstack/ceilometer-0" Jan 22 09:29:51 crc kubenswrapper[4892]: I0122 09:29:51.131052 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a52996eb-0df4-4f58-9f44-186a19c25555-run-httpd\") pod \"ceilometer-0\" (UID: \"a52996eb-0df4-4f58-9f44-186a19c25555\") " pod="openstack/ceilometer-0" Jan 22 09:29:51 crc kubenswrapper[4892]: I0122 09:29:51.131061 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a52996eb-0df4-4f58-9f44-186a19c25555-log-httpd\") pod \"ceilometer-0\" (UID: \"a52996eb-0df4-4f58-9f44-186a19c25555\") " pod="openstack/ceilometer-0" Jan 22 09:29:51 crc kubenswrapper[4892]: I0122 09:29:51.134610 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a52996eb-0df4-4f58-9f44-186a19c25555-scripts\") pod \"ceilometer-0\" (UID: \"a52996eb-0df4-4f58-9f44-186a19c25555\") " pod="openstack/ceilometer-0" Jan 22 09:29:51 crc kubenswrapper[4892]: I0122 09:29:51.134813 4892 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a52996eb-0df4-4f58-9f44-186a19c25555-config-data\") pod \"ceilometer-0\" (UID: \"a52996eb-0df4-4f58-9f44-186a19c25555\") " pod="openstack/ceilometer-0" Jan 22 09:29:51 crc kubenswrapper[4892]: I0122 09:29:51.136360 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a52996eb-0df4-4f58-9f44-186a19c25555-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a52996eb-0df4-4f58-9f44-186a19c25555\") " pod="openstack/ceilometer-0" Jan 22 09:29:51 crc kubenswrapper[4892]: I0122 09:29:51.136771 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a52996eb-0df4-4f58-9f44-186a19c25555-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a52996eb-0df4-4f58-9f44-186a19c25555\") " pod="openstack/ceilometer-0" Jan 22 09:29:51 crc kubenswrapper[4892]: I0122 09:29:51.149987 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n6sd\" (UniqueName: \"kubernetes.io/projected/a52996eb-0df4-4f58-9f44-186a19c25555-kube-api-access-5n6sd\") pod \"ceilometer-0\" (UID: \"a52996eb-0df4-4f58-9f44-186a19c25555\") " pod="openstack/ceilometer-0" Jan 22 09:29:51 crc kubenswrapper[4892]: I0122 09:29:51.203792 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:29:51 crc kubenswrapper[4892]: I0122 09:29:51.429844 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f872bc76-803a-44e5-90a0-b29a5d6b94a6" path="/var/lib/kubelet/pods/f872bc76-803a-44e5-90a0-b29a5d6b94a6/volumes" Jan 22 09:29:51 crc kubenswrapper[4892]: I0122 09:29:51.693524 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:29:51 crc kubenswrapper[4892]: I0122 09:29:51.816920 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a52996eb-0df4-4f58-9f44-186a19c25555","Type":"ContainerStarted","Data":"0829d8f98842f78aa5990ec18866673c8dbe38f33a2cc4fcf4be4ff5e6c87743"} Jan 22 09:29:52 crc kubenswrapper[4892]: I0122 09:29:52.843143 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a52996eb-0df4-4f58-9f44-186a19c25555","Type":"ContainerStarted","Data":"ab01d620c9250b713d766214a10e344eec5cee81aff018cfc5569bc384782257"} Jan 22 09:29:53 crc kubenswrapper[4892]: I0122 09:29:53.857065 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a52996eb-0df4-4f58-9f44-186a19c25555","Type":"ContainerStarted","Data":"61b5fb18f1d0ea77e2a6f67803bf1ae56e54123ce84b8a3497473c386a59e3b6"} Jan 22 09:29:53 crc kubenswrapper[4892]: I0122 09:29:53.857649 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a52996eb-0df4-4f58-9f44-186a19c25555","Type":"ContainerStarted","Data":"9b20bcd741a6581cfe6e10b52023336281a8eaaca91c9cdfe5c436015e86d937"} Jan 22 09:29:56 crc kubenswrapper[4892]: I0122 09:29:56.894205 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a52996eb-0df4-4f58-9f44-186a19c25555","Type":"ContainerStarted","Data":"6c3f030b07aaf1ba33c787866c6674bb9383137538173f28af52c6888e5ef677"} Jan 22 09:29:56 crc kubenswrapper[4892]: I0122 09:29:56.896801 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 09:29:56 crc 
kubenswrapper[4892]: I0122 09:29:56.920519 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.7794398449999997 podStartE2EDuration="6.92049879s" podCreationTimestamp="2026-01-22 09:29:50 +0000 UTC" firstStartedPulling="2026-01-22 09:29:51.707551297 +0000 UTC m=+1161.551630360" lastFinishedPulling="2026-01-22 09:29:55.848610242 +0000 UTC m=+1165.692689305" observedRunningTime="2026-01-22 09:29:56.914521815 +0000 UTC m=+1166.758600868" watchObservedRunningTime="2026-01-22 09:29:56.92049879 +0000 UTC m=+1166.764577853"
Jan 22 09:29:58 crc kubenswrapper[4892]: I0122 09:29:58.918941 4892 generic.go:334] "Generic (PLEG): container finished" podID="606e5e49-0a85-4337-8aa2-12216467367e" containerID="633c083489e5e6b148a2be15a7b2c054065f30aee0fd2bbf3554e768144ef9f1" exitCode=0
Jan 22 09:29:58 crc kubenswrapper[4892]: I0122 09:29:58.919035 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hwvgb" event={"ID":"606e5e49-0a85-4337-8aa2-12216467367e","Type":"ContainerDied","Data":"633c083489e5e6b148a2be15a7b2c054065f30aee0fd2bbf3554e768144ef9f1"}
Jan 22 09:30:00 crc kubenswrapper[4892]: I0122 09:30:00.160424 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484570-4r2f9"]
Jan 22 09:30:00 crc kubenswrapper[4892]: I0122 09:30:00.162336 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484570-4r2f9"
Jan 22 09:30:00 crc kubenswrapper[4892]: I0122 09:30:00.165779 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 22 09:30:00 crc kubenswrapper[4892]: I0122 09:30:00.170879 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484570-4r2f9"]
Jan 22 09:30:00 crc kubenswrapper[4892]: I0122 09:30:00.171736 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 22 09:30:00 crc kubenswrapper[4892]: I0122 09:30:00.207544 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqrzw\" (UniqueName: \"kubernetes.io/projected/69a7e907-5a2d-4c67-939b-c27548a17903-kube-api-access-xqrzw\") pod \"collect-profiles-29484570-4r2f9\" (UID: \"69a7e907-5a2d-4c67-939b-c27548a17903\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484570-4r2f9"
Jan 22 09:30:00 crc kubenswrapper[4892]: I0122 09:30:00.207693 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/69a7e907-5a2d-4c67-939b-c27548a17903-secret-volume\") pod \"collect-profiles-29484570-4r2f9\" (UID: \"69a7e907-5a2d-4c67-939b-c27548a17903\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484570-4r2f9"
Jan 22 09:30:00 crc kubenswrapper[4892]: I0122 09:30:00.207850 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69a7e907-5a2d-4c67-939b-c27548a17903-config-volume\") pod \"collect-profiles-29484570-4r2f9\" (UID: \"69a7e907-5a2d-4c67-939b-c27548a17903\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484570-4r2f9"
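
The pod_startup_latency_tracker entry above is worth decoding: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, while podStartSLOduration is the same interval with the image-pull window (lastFinishedPulling minus firstStartedPulling) subtracted, which is why it comes out smaller. A self-contained Go check of the arithmetic, using the timestamps copied from that entry:

    package main

    import (
        "fmt"
        "time"
    )

    // Layout matching Go's default Time.String() form used in the log entry.
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2026-01-22 09:29:50 +0000 UTC")
        firstPull := mustParse("2026-01-22 09:29:51.707551297 +0000 UTC")
        lastPull := mustParse("2026-01-22 09:29:55.848610242 +0000 UTC")
        running := mustParse("2026-01-22 09:29:56.92049879 +0000 UTC")

        e2e := running.Sub(created)          // podStartE2EDuration: 6.92049879s
        slo := e2e - lastPull.Sub(firstPull) // pull window removed: 2.779439845s
        fmt.Println(e2e, slo)
    }

6.92049879s minus the 4.141058945s pull window gives 2.779439845s, i.e. the reported podStartSLOduration=2.7794398449999997 (the trailing digits are float64 rounding).
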
Jan 22 09:30:00 crc kubenswrapper[4892]: I0122 09:30:00.309990 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69a7e907-5a2d-4c67-939b-c27548a17903-config-volume\") pod \"collect-profiles-29484570-4r2f9\" (UID: \"69a7e907-5a2d-4c67-939b-c27548a17903\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484570-4r2f9"
Jan 22 09:30:00 crc kubenswrapper[4892]: I0122 09:30:00.310055 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqrzw\" (UniqueName: \"kubernetes.io/projected/69a7e907-5a2d-4c67-939b-c27548a17903-kube-api-access-xqrzw\") pod \"collect-profiles-29484570-4r2f9\" (UID: \"69a7e907-5a2d-4c67-939b-c27548a17903\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484570-4r2f9"
Jan 22 09:30:00 crc kubenswrapper[4892]: I0122 09:30:00.310136 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/69a7e907-5a2d-4c67-939b-c27548a17903-secret-volume\") pod \"collect-profiles-29484570-4r2f9\" (UID: \"69a7e907-5a2d-4c67-939b-c27548a17903\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484570-4r2f9"
Jan 22 09:30:00 crc kubenswrapper[4892]: I0122 09:30:00.311088 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69a7e907-5a2d-4c67-939b-c27548a17903-config-volume\") pod \"collect-profiles-29484570-4r2f9\" (UID: \"69a7e907-5a2d-4c67-939b-c27548a17903\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484570-4r2f9"
Jan 22 09:30:00 crc kubenswrapper[4892]: I0122 09:30:00.315975 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/69a7e907-5a2d-4c67-939b-c27548a17903-secret-volume\") pod \"collect-profiles-29484570-4r2f9\" (UID: \"69a7e907-5a2d-4c67-939b-c27548a17903\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484570-4r2f9"
Jan 22 09:30:00 crc kubenswrapper[4892]: I0122 09:30:00.332157 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqrzw\" (UniqueName: \"kubernetes.io/projected/69a7e907-5a2d-4c67-939b-c27548a17903-kube-api-access-xqrzw\") pod \"collect-profiles-29484570-4r2f9\" (UID: \"69a7e907-5a2d-4c67-939b-c27548a17903\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484570-4r2f9"
Jan 22 09:30:00 crc kubenswrapper[4892]: I0122 09:30:00.391979 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hwvgb"
Jan 22 09:30:00 crc kubenswrapper[4892]: I0122 09:30:00.484553 4892 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484570-4r2f9" Jan 22 09:30:00 crc kubenswrapper[4892]: I0122 09:30:00.520459 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/606e5e49-0a85-4337-8aa2-12216467367e-scripts\") pod \"606e5e49-0a85-4337-8aa2-12216467367e\" (UID: \"606e5e49-0a85-4337-8aa2-12216467367e\") " Jan 22 09:30:00 crc kubenswrapper[4892]: I0122 09:30:00.520602 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8cpx\" (UniqueName: \"kubernetes.io/projected/606e5e49-0a85-4337-8aa2-12216467367e-kube-api-access-r8cpx\") pod \"606e5e49-0a85-4337-8aa2-12216467367e\" (UID: \"606e5e49-0a85-4337-8aa2-12216467367e\") " Jan 22 09:30:00 crc kubenswrapper[4892]: I0122 09:30:00.520638 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/606e5e49-0a85-4337-8aa2-12216467367e-config-data\") pod \"606e5e49-0a85-4337-8aa2-12216467367e\" (UID: \"606e5e49-0a85-4337-8aa2-12216467367e\") " Jan 22 09:30:00 crc kubenswrapper[4892]: I0122 09:30:00.520729 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/606e5e49-0a85-4337-8aa2-12216467367e-combined-ca-bundle\") pod \"606e5e49-0a85-4337-8aa2-12216467367e\" (UID: \"606e5e49-0a85-4337-8aa2-12216467367e\") " Jan 22 09:30:00 crc kubenswrapper[4892]: I0122 09:30:00.525525 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/606e5e49-0a85-4337-8aa2-12216467367e-kube-api-access-r8cpx" (OuterVolumeSpecName: "kube-api-access-r8cpx") pod "606e5e49-0a85-4337-8aa2-12216467367e" (UID: "606e5e49-0a85-4337-8aa2-12216467367e"). InnerVolumeSpecName "kube-api-access-r8cpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:30:00 crc kubenswrapper[4892]: I0122 09:30:00.529907 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/606e5e49-0a85-4337-8aa2-12216467367e-scripts" (OuterVolumeSpecName: "scripts") pod "606e5e49-0a85-4337-8aa2-12216467367e" (UID: "606e5e49-0a85-4337-8aa2-12216467367e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:30:00 crc kubenswrapper[4892]: I0122 09:30:00.546817 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/606e5e49-0a85-4337-8aa2-12216467367e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "606e5e49-0a85-4337-8aa2-12216467367e" (UID: "606e5e49-0a85-4337-8aa2-12216467367e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:30:00 crc kubenswrapper[4892]: I0122 09:30:00.550153 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/606e5e49-0a85-4337-8aa2-12216467367e-config-data" (OuterVolumeSpecName: "config-data") pod "606e5e49-0a85-4337-8aa2-12216467367e" (UID: "606e5e49-0a85-4337-8aa2-12216467367e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:30:00 crc kubenswrapper[4892]: I0122 09:30:00.622961 4892 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/606e5e49-0a85-4337-8aa2-12216467367e-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:00 crc kubenswrapper[4892]: I0122 09:30:00.623001 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8cpx\" (UniqueName: \"kubernetes.io/projected/606e5e49-0a85-4337-8aa2-12216467367e-kube-api-access-r8cpx\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:00 crc kubenswrapper[4892]: I0122 09:30:00.623013 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/606e5e49-0a85-4337-8aa2-12216467367e-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:00 crc kubenswrapper[4892]: I0122 09:30:00.623023 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/606e5e49-0a85-4337-8aa2-12216467367e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:00 crc kubenswrapper[4892]: I0122 09:30:00.924754 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484570-4r2f9"] Jan 22 09:30:00 crc kubenswrapper[4892]: W0122 09:30:00.930107 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69a7e907_5a2d_4c67_939b_c27548a17903.slice/crio-5bc1d9f6269119a108b5cae146e6343a560355d64c4190f2054cf6578101e1dd WatchSource:0}: Error finding container 5bc1d9f6269119a108b5cae146e6343a560355d64c4190f2054cf6578101e1dd: Status 404 returned error can't find the container with id 5bc1d9f6269119a108b5cae146e6343a560355d64c4190f2054cf6578101e1dd Jan 22 09:30:00 crc kubenswrapper[4892]: I0122 09:30:00.936719 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hwvgb" event={"ID":"606e5e49-0a85-4337-8aa2-12216467367e","Type":"ContainerDied","Data":"5b984d68e35a84ece0c196369334568bbbfca99391ad835a2e31c4b537136f4a"} Jan 22 09:30:00 crc kubenswrapper[4892]: I0122 09:30:00.936761 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b984d68e35a84ece0c196369334568bbbfca99391ad835a2e31c4b537136f4a" Jan 22 09:30:00 crc kubenswrapper[4892]: I0122 09:30:00.936806 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hwvgb" Jan 22 09:30:01 crc kubenswrapper[4892]: I0122 09:30:01.043913 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 22 09:30:01 crc kubenswrapper[4892]: E0122 09:30:01.044820 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="606e5e49-0a85-4337-8aa2-12216467367e" containerName="nova-cell0-conductor-db-sync" Jan 22 09:30:01 crc kubenswrapper[4892]: I0122 09:30:01.044842 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="606e5e49-0a85-4337-8aa2-12216467367e" containerName="nova-cell0-conductor-db-sync" Jan 22 09:30:01 crc kubenswrapper[4892]: I0122 09:30:01.045156 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="606e5e49-0a85-4337-8aa2-12216467367e" containerName="nova-cell0-conductor-db-sync" Jan 22 09:30:01 crc kubenswrapper[4892]: I0122 09:30:01.046091 4892 util.go:30] "No sandbox for pod can be found. 
Jan 22 09:30:01 crc kubenswrapper[4892]: I0122 09:30:01.046091 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 22 09:30:01 crc kubenswrapper[4892]: I0122 09:30:01.048431 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-w6rft"
Jan 22 09:30:01 crc kubenswrapper[4892]: I0122 09:30:01.048753 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 22 09:30:01 crc kubenswrapper[4892]: I0122 09:30:01.054020 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 22 09:30:01 crc kubenswrapper[4892]: I0122 09:30:01.132687 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzplg\" (UniqueName: \"kubernetes.io/projected/67407100-b6b3-4802-9bb1-337db9cbb3e6-kube-api-access-qzplg\") pod \"nova-cell0-conductor-0\" (UID: \"67407100-b6b3-4802-9bb1-337db9cbb3e6\") " pod="openstack/nova-cell0-conductor-0"
Jan 22 09:30:01 crc kubenswrapper[4892]: I0122 09:30:01.132729 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67407100-b6b3-4802-9bb1-337db9cbb3e6-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"67407100-b6b3-4802-9bb1-337db9cbb3e6\") " pod="openstack/nova-cell0-conductor-0"
Jan 22 09:30:01 crc kubenswrapper[4892]: I0122 09:30:01.132867 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67407100-b6b3-4802-9bb1-337db9cbb3e6-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"67407100-b6b3-4802-9bb1-337db9cbb3e6\") " pod="openstack/nova-cell0-conductor-0"
Jan 22 09:30:01 crc kubenswrapper[4892]: I0122 09:30:01.234510 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67407100-b6b3-4802-9bb1-337db9cbb3e6-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"67407100-b6b3-4802-9bb1-337db9cbb3e6\") " pod="openstack/nova-cell0-conductor-0"
Jan 22 09:30:01 crc kubenswrapper[4892]: I0122 09:30:01.234548 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzplg\" (UniqueName: \"kubernetes.io/projected/67407100-b6b3-4802-9bb1-337db9cbb3e6-kube-api-access-qzplg\") pod \"nova-cell0-conductor-0\" (UID: \"67407100-b6b3-4802-9bb1-337db9cbb3e6\") " pod="openstack/nova-cell0-conductor-0"
Jan 22 09:30:01 crc kubenswrapper[4892]: I0122 09:30:01.234638 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67407100-b6b3-4802-9bb1-337db9cbb3e6-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"67407100-b6b3-4802-9bb1-337db9cbb3e6\") " pod="openstack/nova-cell0-conductor-0"
Jan 22 09:30:01 crc kubenswrapper[4892]: I0122 09:30:01.238846 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67407100-b6b3-4802-9bb1-337db9cbb3e6-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"67407100-b6b3-4802-9bb1-337db9cbb3e6\") " pod="openstack/nova-cell0-conductor-0"
Jan 22 09:30:01 crc kubenswrapper[4892]: I0122 09:30:01.239809 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67407100-b6b3-4802-9bb1-337db9cbb3e6-config-data\") pod \"nova-cell0-conductor-0\"
(UID: \"67407100-b6b3-4802-9bb1-337db9cbb3e6\") " pod="openstack/nova-cell0-conductor-0" Jan 22 09:30:01 crc kubenswrapper[4892]: I0122 09:30:01.252197 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzplg\" (UniqueName: \"kubernetes.io/projected/67407100-b6b3-4802-9bb1-337db9cbb3e6-kube-api-access-qzplg\") pod \"nova-cell0-conductor-0\" (UID: \"67407100-b6b3-4802-9bb1-337db9cbb3e6\") " pod="openstack/nova-cell0-conductor-0" Jan 22 09:30:01 crc kubenswrapper[4892]: I0122 09:30:01.404621 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 22 09:30:01 crc kubenswrapper[4892]: I0122 09:30:01.853798 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 22 09:30:01 crc kubenswrapper[4892]: W0122 09:30:01.855475 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67407100_b6b3_4802_9bb1_337db9cbb3e6.slice/crio-1deff68f20df11f1b248ee8e44a2342921582c7cc1701f4c6b4cd3739b91f132 WatchSource:0}: Error finding container 1deff68f20df11f1b248ee8e44a2342921582c7cc1701f4c6b4cd3739b91f132: Status 404 returned error can't find the container with id 1deff68f20df11f1b248ee8e44a2342921582c7cc1701f4c6b4cd3739b91f132 Jan 22 09:30:01 crc kubenswrapper[4892]: I0122 09:30:01.949384 4892 generic.go:334] "Generic (PLEG): container finished" podID="69a7e907-5a2d-4c67-939b-c27548a17903" containerID="98b85a9e524dd176269e69c674cdb3658f15b63d24bce2c41933041a87fa6ba0" exitCode=0 Jan 22 09:30:01 crc kubenswrapper[4892]: I0122 09:30:01.949918 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484570-4r2f9" event={"ID":"69a7e907-5a2d-4c67-939b-c27548a17903","Type":"ContainerDied","Data":"98b85a9e524dd176269e69c674cdb3658f15b63d24bce2c41933041a87fa6ba0"} Jan 22 09:30:01 crc kubenswrapper[4892]: I0122 09:30:01.949957 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484570-4r2f9" event={"ID":"69a7e907-5a2d-4c67-939b-c27548a17903","Type":"ContainerStarted","Data":"5bc1d9f6269119a108b5cae146e6343a560355d64c4190f2054cf6578101e1dd"} Jan 22 09:30:01 crc kubenswrapper[4892]: I0122 09:30:01.952733 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"67407100-b6b3-4802-9bb1-337db9cbb3e6","Type":"ContainerStarted","Data":"1deff68f20df11f1b248ee8e44a2342921582c7cc1701f4c6b4cd3739b91f132"} Jan 22 09:30:02 crc kubenswrapper[4892]: I0122 09:30:02.961745 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"67407100-b6b3-4802-9bb1-337db9cbb3e6","Type":"ContainerStarted","Data":"9146df2c21cb9b1878b66aed850d1da84df274f80ae6de3d4deee81fbecfc578"} Jan 22 09:30:02 crc kubenswrapper[4892]: I0122 09:30:02.962420 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 22 09:30:02 crc kubenswrapper[4892]: I0122 09:30:02.992368 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.99235235 podStartE2EDuration="1.99235235s" podCreationTimestamp="2026-01-22 09:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:30:02.990479554 +0000 UTC 
m=+1172.834558617" watchObservedRunningTime="2026-01-22 09:30:02.99235235 +0000 UTC m=+1172.836431413" Jan 22 09:30:03 crc kubenswrapper[4892]: I0122 09:30:03.297986 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484570-4r2f9" Jan 22 09:30:03 crc kubenswrapper[4892]: I0122 09:30:03.373379 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/69a7e907-5a2d-4c67-939b-c27548a17903-secret-volume\") pod \"69a7e907-5a2d-4c67-939b-c27548a17903\" (UID: \"69a7e907-5a2d-4c67-939b-c27548a17903\") " Jan 22 09:30:03 crc kubenswrapper[4892]: I0122 09:30:03.373471 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69a7e907-5a2d-4c67-939b-c27548a17903-config-volume\") pod \"69a7e907-5a2d-4c67-939b-c27548a17903\" (UID: \"69a7e907-5a2d-4c67-939b-c27548a17903\") " Jan 22 09:30:03 crc kubenswrapper[4892]: I0122 09:30:03.373499 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqrzw\" (UniqueName: \"kubernetes.io/projected/69a7e907-5a2d-4c67-939b-c27548a17903-kube-api-access-xqrzw\") pod \"69a7e907-5a2d-4c67-939b-c27548a17903\" (UID: \"69a7e907-5a2d-4c67-939b-c27548a17903\") " Jan 22 09:30:03 crc kubenswrapper[4892]: I0122 09:30:03.375349 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69a7e907-5a2d-4c67-939b-c27548a17903-config-volume" (OuterVolumeSpecName: "config-volume") pod "69a7e907-5a2d-4c67-939b-c27548a17903" (UID: "69a7e907-5a2d-4c67-939b-c27548a17903"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:30:03 crc kubenswrapper[4892]: I0122 09:30:03.380647 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69a7e907-5a2d-4c67-939b-c27548a17903-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "69a7e907-5a2d-4c67-939b-c27548a17903" (UID: "69a7e907-5a2d-4c67-939b-c27548a17903"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:30:03 crc kubenswrapper[4892]: I0122 09:30:03.380707 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69a7e907-5a2d-4c67-939b-c27548a17903-kube-api-access-xqrzw" (OuterVolumeSpecName: "kube-api-access-xqrzw") pod "69a7e907-5a2d-4c67-939b-c27548a17903" (UID: "69a7e907-5a2d-4c67-939b-c27548a17903"). InnerVolumeSpecName "kube-api-access-xqrzw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:30:03 crc kubenswrapper[4892]: I0122 09:30:03.475429 4892 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/69a7e907-5a2d-4c67-939b-c27548a17903-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:03 crc kubenswrapper[4892]: I0122 09:30:03.475454 4892 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69a7e907-5a2d-4c67-939b-c27548a17903-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:03 crc kubenswrapper[4892]: I0122 09:30:03.475465 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqrzw\" (UniqueName: \"kubernetes.io/projected/69a7e907-5a2d-4c67-939b-c27548a17903-kube-api-access-xqrzw\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:03 crc kubenswrapper[4892]: I0122 09:30:03.978883 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484570-4r2f9" event={"ID":"69a7e907-5a2d-4c67-939b-c27548a17903","Type":"ContainerDied","Data":"5bc1d9f6269119a108b5cae146e6343a560355d64c4190f2054cf6578101e1dd"} Jan 22 09:30:03 crc kubenswrapper[4892]: I0122 09:30:03.978944 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bc1d9f6269119a108b5cae146e6343a560355d64c4190f2054cf6578101e1dd" Jan 22 09:30:03 crc kubenswrapper[4892]: I0122 09:30:03.978899 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484570-4r2f9" Jan 22 09:30:11 crc kubenswrapper[4892]: I0122 09:30:11.444916 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 22 09:30:11 crc kubenswrapper[4892]: I0122 09:30:11.919660 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-kkq2t"] Jan 22 09:30:11 crc kubenswrapper[4892]: E0122 09:30:11.920374 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69a7e907-5a2d-4c67-939b-c27548a17903" containerName="collect-profiles" Jan 22 09:30:11 crc kubenswrapper[4892]: I0122 09:30:11.920387 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="69a7e907-5a2d-4c67-939b-c27548a17903" containerName="collect-profiles" Jan 22 09:30:11 crc kubenswrapper[4892]: I0122 09:30:11.920548 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="69a7e907-5a2d-4c67-939b-c27548a17903" containerName="collect-profiles" Jan 22 09:30:11 crc kubenswrapper[4892]: I0122 09:30:11.921130 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-kkq2t" Jan 22 09:30:11 crc kubenswrapper[4892]: I0122 09:30:11.934185 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-kkq2t"] Jan 22 09:30:11 crc kubenswrapper[4892]: I0122 09:30:11.934927 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 22 09:30:11 crc kubenswrapper[4892]: I0122 09:30:11.935160 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.051850 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9qts\" (UniqueName: \"kubernetes.io/projected/f95183f6-1315-4165-9659-6d1c77f3f9bd-kube-api-access-c9qts\") pod \"nova-cell0-cell-mapping-kkq2t\" (UID: \"f95183f6-1315-4165-9659-6d1c77f3f9bd\") " pod="openstack/nova-cell0-cell-mapping-kkq2t" Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.051948 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f95183f6-1315-4165-9659-6d1c77f3f9bd-scripts\") pod \"nova-cell0-cell-mapping-kkq2t\" (UID: \"f95183f6-1315-4165-9659-6d1c77f3f9bd\") " pod="openstack/nova-cell0-cell-mapping-kkq2t" Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.051999 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f95183f6-1315-4165-9659-6d1c77f3f9bd-config-data\") pod \"nova-cell0-cell-mapping-kkq2t\" (UID: \"f95183f6-1315-4165-9659-6d1c77f3f9bd\") " pod="openstack/nova-cell0-cell-mapping-kkq2t" Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.053753 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f95183f6-1315-4165-9659-6d1c77f3f9bd-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-kkq2t\" (UID: \"f95183f6-1315-4165-9659-6d1c77f3f9bd\") " pod="openstack/nova-cell0-cell-mapping-kkq2t" Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.123417 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.124735 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.127250 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.132636 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.155581 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9qts\" (UniqueName: \"kubernetes.io/projected/f95183f6-1315-4165-9659-6d1c77f3f9bd-kube-api-access-c9qts\") pod \"nova-cell0-cell-mapping-kkq2t\" (UID: \"f95183f6-1315-4165-9659-6d1c77f3f9bd\") " pod="openstack/nova-cell0-cell-mapping-kkq2t"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.155656 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f95183f6-1315-4165-9659-6d1c77f3f9bd-scripts\") pod \"nova-cell0-cell-mapping-kkq2t\" (UID: \"f95183f6-1315-4165-9659-6d1c77f3f9bd\") " pod="openstack/nova-cell0-cell-mapping-kkq2t"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.155688 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f95183f6-1315-4165-9659-6d1c77f3f9bd-config-data\") pod \"nova-cell0-cell-mapping-kkq2t\" (UID: \"f95183f6-1315-4165-9659-6d1c77f3f9bd\") " pod="openstack/nova-cell0-cell-mapping-kkq2t"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.155722 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f95183f6-1315-4165-9659-6d1c77f3f9bd-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-kkq2t\" (UID: \"f95183f6-1315-4165-9659-6d1c77f3f9bd\") " pod="openstack/nova-cell0-cell-mapping-kkq2t"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.162164 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f95183f6-1315-4165-9659-6d1c77f3f9bd-config-data\") pod \"nova-cell0-cell-mapping-kkq2t\" (UID: \"f95183f6-1315-4165-9659-6d1c77f3f9bd\") " pod="openstack/nova-cell0-cell-mapping-kkq2t"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.168627 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f95183f6-1315-4165-9659-6d1c77f3f9bd-scripts\") pod \"nova-cell0-cell-mapping-kkq2t\" (UID: \"f95183f6-1315-4165-9659-6d1c77f3f9bd\") " pod="openstack/nova-cell0-cell-mapping-kkq2t"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.175806 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9qts\" (UniqueName: \"kubernetes.io/projected/f95183f6-1315-4165-9659-6d1c77f3f9bd-kube-api-access-c9qts\") pod \"nova-cell0-cell-mapping-kkq2t\" (UID: \"f95183f6-1315-4165-9659-6d1c77f3f9bd\") " pod="openstack/nova-cell0-cell-mapping-kkq2t"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.177214 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f95183f6-1315-4165-9659-6d1c77f3f9bd-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-kkq2t\" (UID: \"f95183f6-1315-4165-9659-6d1c77f3f9bd\") " pod="openstack/nova-cell0-cell-mapping-kkq2t"
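
Every pod admitted in this window (nova-cell0-cell-mapping-kkq2t above, then nova-scheduler-0, nova-metadata-0, nova-cell1-novncproxy-0 and the dnsmasq pod below) repeats the same three-phase volume sequence: VerifyControllerAttachedVolume (reconciler_common.go:245), MountVolume started (reconciler_common.go:218), MountVolume.SetUp succeeded (operation_generator.go:637). As the mount-side counterpart of the teardown tracker sketched earlier, this assumed Go filter over the journal on stdin checks that no (pod, volume) pair reaches a phase before the preceding one (again, the regexes cover only the exact message forms in this log):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Phases of the kubelet volume reconciler as they appear in this log.
    var phases = []struct {
        name string
        re   *regexp.Regexp
    }{
        {"verify", regexp.MustCompile(`VerifyControllerAttachedVolume started for volume \\"([^"\\]+)\\".*pod \\"([^"\\]+)\\"`)},
        {"mount", regexp.MustCompile(`MountVolume started for volume \\"([^"\\]+)\\".*pod \\"([^"\\]+)\\"`)},
        {"setup", regexp.MustCompile(`MountVolume\.SetUp succeeded for volume \\"([^"\\]+)\\".*pod \\"([^"\\]+)\\"`)},
    }

    func main() {
        seen := map[string]int{} // "pod/volume" -> highest phase reached so far
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            for i, p := range phases {
                if m := p.re.FindStringSubmatch(sc.Text()); m != nil {
                    key := m[2] + "/" + m[1]
                    if i > 0 && seen[key] < i {
                        fmt.Printf("out of order: %s hit %q before %q\n", key, phases[i].name, phases[i-1].name)
                    }
                    if seen[key] < i+1 {
                        seen[key] = i + 1
                    }
                }
            }
        }
    }
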
"SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.218766 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.224433 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.257978 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d7xj\" (UniqueName: \"kubernetes.io/projected/5b2bc62f-ae81-4ab9-81e3-310506ee057a-kube-api-access-4d7xj\") pod \"nova-scheduler-0\" (UID: \"5b2bc62f-ae81-4ab9-81e3-310506ee057a\") " pod="openstack/nova-scheduler-0" Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.258301 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b2bc62f-ae81-4ab9-81e3-310506ee057a-config-data\") pod \"nova-scheduler-0\" (UID: \"5b2bc62f-ae81-4ab9-81e3-310506ee057a\") " pod="openstack/nova-scheduler-0" Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.258460 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b2bc62f-ae81-4ab9-81e3-310506ee057a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5b2bc62f-ae81-4ab9-81e3-310506ee057a\") " pod="openstack/nova-scheduler-0" Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.259744 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-kkq2t" Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.261678 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.363363 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bcbf858-6b11-4f81-9782-00df8dad36cf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1bcbf858-6b11-4f81-9782-00df8dad36cf\") " pod="openstack/nova-metadata-0" Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.363718 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d7xj\" (UniqueName: \"kubernetes.io/projected/5b2bc62f-ae81-4ab9-81e3-310506ee057a-kube-api-access-4d7xj\") pod \"nova-scheduler-0\" (UID: \"5b2bc62f-ae81-4ab9-81e3-310506ee057a\") " pod="openstack/nova-scheduler-0" Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.363777 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b2bc62f-ae81-4ab9-81e3-310506ee057a-config-data\") pod \"nova-scheduler-0\" (UID: \"5b2bc62f-ae81-4ab9-81e3-310506ee057a\") " pod="openstack/nova-scheduler-0" Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.363811 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bcbf858-6b11-4f81-9782-00df8dad36cf-logs\") pod \"nova-metadata-0\" (UID: \"1bcbf858-6b11-4f81-9782-00df8dad36cf\") " pod="openstack/nova-metadata-0" Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.363850 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-t8h8r\" (UniqueName: \"kubernetes.io/projected/1bcbf858-6b11-4f81-9782-00df8dad36cf-kube-api-access-t8h8r\") pod \"nova-metadata-0\" (UID: \"1bcbf858-6b11-4f81-9782-00df8dad36cf\") " pod="openstack/nova-metadata-0" Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.363876 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b2bc62f-ae81-4ab9-81e3-310506ee057a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5b2bc62f-ae81-4ab9-81e3-310506ee057a\") " pod="openstack/nova-scheduler-0" Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.363936 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bcbf858-6b11-4f81-9782-00df8dad36cf-config-data\") pod \"nova-metadata-0\" (UID: \"1bcbf858-6b11-4f81-9782-00df8dad36cf\") " pod="openstack/nova-metadata-0" Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.364250 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.367320 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.371807 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.371811 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b2bc62f-ae81-4ab9-81e3-310506ee057a-config-data\") pod \"nova-scheduler-0\" (UID: \"5b2bc62f-ae81-4ab9-81e3-310506ee057a\") " pod="openstack/nova-scheduler-0" Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.379024 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b2bc62f-ae81-4ab9-81e3-310506ee057a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5b2bc62f-ae81-4ab9-81e3-310506ee057a\") " pod="openstack/nova-scheduler-0" Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.409179 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d7xj\" (UniqueName: \"kubernetes.io/projected/5b2bc62f-ae81-4ab9-81e3-310506ee057a-kube-api-access-4d7xj\") pod \"nova-scheduler-0\" (UID: \"5b2bc62f-ae81-4ab9-81e3-310506ee057a\") " pod="openstack/nova-scheduler-0" Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.419486 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.437771 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.467318 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bcbf858-6b11-4f81-9782-00df8dad36cf-logs\") pod \"nova-metadata-0\" (UID: \"1bcbf858-6b11-4f81-9782-00df8dad36cf\") " pod="openstack/nova-metadata-0"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.467374 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8h8r\" (UniqueName: \"kubernetes.io/projected/1bcbf858-6b11-4f81-9782-00df8dad36cf-kube-api-access-t8h8r\") pod \"nova-metadata-0\" (UID: \"1bcbf858-6b11-4f81-9782-00df8dad36cf\") " pod="openstack/nova-metadata-0"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.467429 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bcbf858-6b11-4f81-9782-00df8dad36cf-config-data\") pod \"nova-metadata-0\" (UID: \"1bcbf858-6b11-4f81-9782-00df8dad36cf\") " pod="openstack/nova-metadata-0"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.467480 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ea12e63-6eef-4df6-a9ad-261f657546c3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ea12e63-6eef-4df6-a9ad-261f657546c3\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.467501 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7gdc\" (UniqueName: \"kubernetes.io/projected/7ea12e63-6eef-4df6-a9ad-261f657546c3-kube-api-access-b7gdc\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ea12e63-6eef-4df6-a9ad-261f657546c3\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.467521 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ea12e63-6eef-4df6-a9ad-261f657546c3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ea12e63-6eef-4df6-a9ad-261f657546c3\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.467547 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bcbf858-6b11-4f81-9782-00df8dad36cf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1bcbf858-6b11-4f81-9782-00df8dad36cf\") " pod="openstack/nova-metadata-0"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.470103 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bcbf858-6b11-4f81-9782-00df8dad36cf-logs\") pod \"nova-metadata-0\" (UID: \"1bcbf858-6b11-4f81-9782-00df8dad36cf\") " pod="openstack/nova-metadata-0"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.479867 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bcbf858-6b11-4f81-9782-00df8dad36cf-config-data\") pod \"nova-metadata-0\" (UID: \"1bcbf858-6b11-4f81-9782-00df8dad36cf\") " pod="openstack/nova-metadata-0"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.480905 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bcbf858-6b11-4f81-9782-00df8dad36cf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1bcbf858-6b11-4f81-9782-00df8dad36cf\") " pod="openstack/nova-metadata-0"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.512359 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-7x292"]
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.513811 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-7x292"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.552662 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-7x292"]
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.572284 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ea12e63-6eef-4df6-a9ad-261f657546c3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ea12e63-6eef-4df6-a9ad-261f657546c3\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.574216 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7gdc\" (UniqueName: \"kubernetes.io/projected/7ea12e63-6eef-4df6-a9ad-261f657546c3-kube-api-access-b7gdc\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ea12e63-6eef-4df6-a9ad-261f657546c3\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.574244 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ea12e63-6eef-4df6-a9ad-261f657546c3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ea12e63-6eef-4df6-a9ad-261f657546c3\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.574316 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/310458cb-5d40-4525-a26e-0df3583401c7-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-7x292\" (UID: \"310458cb-5d40-4525-a26e-0df3583401c7\") " pod="openstack/dnsmasq-dns-557bbc7df7-7x292"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.574371 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8msh5\" (UniqueName: \"kubernetes.io/projected/310458cb-5d40-4525-a26e-0df3583401c7-kube-api-access-8msh5\") pod \"dnsmasq-dns-557bbc7df7-7x292\" (UID: \"310458cb-5d40-4525-a26e-0df3583401c7\") " pod="openstack/dnsmasq-dns-557bbc7df7-7x292"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.574470 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/310458cb-5d40-4525-a26e-0df3583401c7-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-7x292\" (UID: \"310458cb-5d40-4525-a26e-0df3583401c7\") " pod="openstack/dnsmasq-dns-557bbc7df7-7x292"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.574509 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/310458cb-5d40-4525-a26e-0df3583401c7-config\") pod \"dnsmasq-dns-557bbc7df7-7x292\" (UID: \"310458cb-5d40-4525-a26e-0df3583401c7\") " pod="openstack/dnsmasq-dns-557bbc7df7-7x292"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.574541 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/310458cb-5d40-4525-a26e-0df3583401c7-dns-svc\") pod \"dnsmasq-dns-557bbc7df7-7x292\" (UID: \"310458cb-5d40-4525-a26e-0df3583401c7\") " pod="openstack/dnsmasq-dns-557bbc7df7-7x292"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.574570 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/310458cb-5d40-4525-a26e-0df3583401c7-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-7x292\" (UID: \"310458cb-5d40-4525-a26e-0df3583401c7\") " pod="openstack/dnsmasq-dns-557bbc7df7-7x292"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.575219 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8h8r\" (UniqueName: \"kubernetes.io/projected/1bcbf858-6b11-4f81-9782-00df8dad36cf-kube-api-access-t8h8r\") pod \"nova-metadata-0\" (UID: \"1bcbf858-6b11-4f81-9782-00df8dad36cf\") " pod="openstack/nova-metadata-0"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.585953 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ea12e63-6eef-4df6-a9ad-261f657546c3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ea12e63-6eef-4df6-a9ad-261f657546c3\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.587676 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ea12e63-6eef-4df6-a9ad-261f657546c3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ea12e63-6eef-4df6-a9ad-261f657546c3\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.607838 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.639732 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.665715 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.665810 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.671489 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7gdc\" (UniqueName: \"kubernetes.io/projected/7ea12e63-6eef-4df6-a9ad-261f657546c3-kube-api-access-b7gdc\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ea12e63-6eef-4df6-a9ad-261f657546c3\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.674769 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.676499 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/310458cb-5d40-4525-a26e-0df3583401c7-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-7x292\" (UID: \"310458cb-5d40-4525-a26e-0df3583401c7\") " pod="openstack/dnsmasq-dns-557bbc7df7-7x292"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.676545 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8msh5\" (UniqueName: \"kubernetes.io/projected/310458cb-5d40-4525-a26e-0df3583401c7-kube-api-access-8msh5\") pod \"dnsmasq-dns-557bbc7df7-7x292\" (UID: \"310458cb-5d40-4525-a26e-0df3583401c7\") " pod="openstack/dnsmasq-dns-557bbc7df7-7x292"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.676600 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/310458cb-5d40-4525-a26e-0df3583401c7-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-7x292\" (UID: \"310458cb-5d40-4525-a26e-0df3583401c7\") " pod="openstack/dnsmasq-dns-557bbc7df7-7x292"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.676628 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/310458cb-5d40-4525-a26e-0df3583401c7-config\") pod \"dnsmasq-dns-557bbc7df7-7x292\" (UID: \"310458cb-5d40-4525-a26e-0df3583401c7\") " pod="openstack/dnsmasq-dns-557bbc7df7-7x292"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.676651 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/310458cb-5d40-4525-a26e-0df3583401c7-dns-svc\") pod \"dnsmasq-dns-557bbc7df7-7x292\" (UID: \"310458cb-5d40-4525-a26e-0df3583401c7\") " pod="openstack/dnsmasq-dns-557bbc7df7-7x292"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.676671 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/310458cb-5d40-4525-a26e-0df3583401c7-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-7x292\" (UID: \"310458cb-5d40-4525-a26e-0df3583401c7\") " pod="openstack/dnsmasq-dns-557bbc7df7-7x292"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.677641 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/310458cb-5d40-4525-a26e-0df3583401c7-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-7x292\" (UID: \"310458cb-5d40-4525-a26e-0df3583401c7\") " pod="openstack/dnsmasq-dns-557bbc7df7-7x292"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.678171 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/310458cb-5d40-4525-a26e-0df3583401c7-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-7x292\" (UID: \"310458cb-5d40-4525-a26e-0df3583401c7\") " pod="openstack/dnsmasq-dns-557bbc7df7-7x292"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.683171 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/310458cb-5d40-4525-a26e-0df3583401c7-config\") pod \"dnsmasq-dns-557bbc7df7-7x292\" (UID: \"310458cb-5d40-4525-a26e-0df3583401c7\") " pod="openstack/dnsmasq-dns-557bbc7df7-7x292"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.683741 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/310458cb-5d40-4525-a26e-0df3583401c7-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-7x292\" (UID: \"310458cb-5d40-4525-a26e-0df3583401c7\") " pod="openstack/dnsmasq-dns-557bbc7df7-7x292"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.684266 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/310458cb-5d40-4525-a26e-0df3583401c7-dns-svc\") pod \"dnsmasq-dns-557bbc7df7-7x292\" (UID: \"310458cb-5d40-4525-a26e-0df3583401c7\") " pod="openstack/dnsmasq-dns-557bbc7df7-7x292"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.736116 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8msh5\" (UniqueName: \"kubernetes.io/projected/310458cb-5d40-4525-a26e-0df3583401c7-kube-api-access-8msh5\") pod \"dnsmasq-dns-557bbc7df7-7x292\" (UID: \"310458cb-5d40-4525-a26e-0df3583401c7\") " pod="openstack/dnsmasq-dns-557bbc7df7-7x292"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.755743 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.779400 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrbbr\" (UniqueName: \"kubernetes.io/projected/a11d9dd9-7bad-489e-89c0-76aba2c3a239-kube-api-access-rrbbr\") pod \"nova-api-0\" (UID: \"a11d9dd9-7bad-489e-89c0-76aba2c3a239\") " pod="openstack/nova-api-0"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.779454 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a11d9dd9-7bad-489e-89c0-76aba2c3a239-config-data\") pod \"nova-api-0\" (UID: \"a11d9dd9-7bad-489e-89c0-76aba2c3a239\") " pod="openstack/nova-api-0"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.779504 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a11d9dd9-7bad-489e-89c0-76aba2c3a239-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a11d9dd9-7bad-489e-89c0-76aba2c3a239\") " pod="openstack/nova-api-0"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.779554 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a11d9dd9-7bad-489e-89c0-76aba2c3a239-logs\") pod \"nova-api-0\" (UID: \"a11d9dd9-7bad-489e-89c0-76aba2c3a239\") " pod="openstack/nova-api-0"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.872024 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-7x292"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.881068 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a11d9dd9-7bad-489e-89c0-76aba2c3a239-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a11d9dd9-7bad-489e-89c0-76aba2c3a239\") " pod="openstack/nova-api-0"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.881117 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a11d9dd9-7bad-489e-89c0-76aba2c3a239-logs\") pod \"nova-api-0\" (UID: \"a11d9dd9-7bad-489e-89c0-76aba2c3a239\") " pod="openstack/nova-api-0"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.881211 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrbbr\" (UniqueName: \"kubernetes.io/projected/a11d9dd9-7bad-489e-89c0-76aba2c3a239-kube-api-access-rrbbr\") pod \"nova-api-0\" (UID: \"a11d9dd9-7bad-489e-89c0-76aba2c3a239\") " pod="openstack/nova-api-0"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.881241 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a11d9dd9-7bad-489e-89c0-76aba2c3a239-config-data\") pod \"nova-api-0\" (UID: \"a11d9dd9-7bad-489e-89c0-76aba2c3a239\") " pod="openstack/nova-api-0"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.882093 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a11d9dd9-7bad-489e-89c0-76aba2c3a239-logs\") pod \"nova-api-0\" (UID: \"a11d9dd9-7bad-489e-89c0-76aba2c3a239\") " pod="openstack/nova-api-0"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.888613 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a11d9dd9-7bad-489e-89c0-76aba2c3a239-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a11d9dd9-7bad-489e-89c0-76aba2c3a239\") " pod="openstack/nova-api-0"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.892427 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a11d9dd9-7bad-489e-89c0-76aba2c3a239-config-data\") pod \"nova-api-0\" (UID: \"a11d9dd9-7bad-489e-89c0-76aba2c3a239\") " pod="openstack/nova-api-0"
Jan 22 09:30:12 crc kubenswrapper[4892]: I0122 09:30:12.902198 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrbbr\" (UniqueName: \"kubernetes.io/projected/a11d9dd9-7bad-489e-89c0-76aba2c3a239-kube-api-access-rrbbr\") pod \"nova-api-0\" (UID: \"a11d9dd9-7bad-489e-89c0-76aba2c3a239\") " pod="openstack/nova-api-0"
Jan 22 09:30:13 crc kubenswrapper[4892]: I0122 09:30:13.009759 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 22 09:30:13 crc kubenswrapper[4892]: I0122 09:30:13.037791 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-kkq2t"]
Jan 22 09:30:13 crc kubenswrapper[4892]: I0122 09:30:13.078389 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-kkq2t" event={"ID":"f95183f6-1315-4165-9659-6d1c77f3f9bd","Type":"ContainerStarted","Data":"e4cc805ca49baf5d050e7fcee831c57947cbff109bc83783ca5c6889a1f6c66f"}
Jan 22 09:30:13 crc kubenswrapper[4892]: I0122 09:30:13.202715 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 22 09:30:13 crc kubenswrapper[4892]: W0122 09:30:13.228494 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bcbf858_6b11_4f81_9782_00df8dad36cf.slice/crio-c97273d47c2166241e06185b7150e9b0697f779ce3762055219d2a3a144fb6ae WatchSource:0}: Error finding container c97273d47c2166241e06185b7150e9b0697f779ce3762055219d2a3a144fb6ae: Status 404 returned error can't find the container with id c97273d47c2166241e06185b7150e9b0697f779ce3762055219d2a3a144fb6ae
Jan 22 09:30:13 crc kubenswrapper[4892]: I0122 09:30:13.252657 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pttx4"]
Jan 22 09:30:13 crc kubenswrapper[4892]: I0122 09:30:13.253804 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pttx4"
Jan 22 09:30:13 crc kubenswrapper[4892]: I0122 09:30:13.256342 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Jan 22 09:30:13 crc kubenswrapper[4892]: I0122 09:30:13.263443 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 22 09:30:13 crc kubenswrapper[4892]: I0122 09:30:13.277933 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pttx4"]
Jan 22 09:30:13 crc kubenswrapper[4892]: I0122 09:30:13.293100 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af63861-4d6e-46cb-ad5b-0b0161c47cb9-config-data\") pod \"nova-cell1-conductor-db-sync-pttx4\" (UID: \"9af63861-4d6e-46cb-ad5b-0b0161c47cb9\") " pod="openstack/nova-cell1-conductor-db-sync-pttx4"
Jan 22 09:30:13 crc kubenswrapper[4892]: I0122 09:30:13.293220 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9af63861-4d6e-46cb-ad5b-0b0161c47cb9-scripts\") pod \"nova-cell1-conductor-db-sync-pttx4\" (UID: \"9af63861-4d6e-46cb-ad5b-0b0161c47cb9\") " pod="openstack/nova-cell1-conductor-db-sync-pttx4"
Jan 22 09:30:13 crc kubenswrapper[4892]: I0122 09:30:13.293277 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw4g5\" (UniqueName: \"kubernetes.io/projected/9af63861-4d6e-46cb-ad5b-0b0161c47cb9-kube-api-access-lw4g5\") pod \"nova-cell1-conductor-db-sync-pttx4\" (UID: \"9af63861-4d6e-46cb-ad5b-0b0161c47cb9\") " pod="openstack/nova-cell1-conductor-db-sync-pttx4"
Jan 22 09:30:13 crc kubenswrapper[4892]: I0122 09:30:13.293317 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af63861-4d6e-46cb-ad5b-0b0161c47cb9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-pttx4\" (UID: \"9af63861-4d6e-46cb-ad5b-0b0161c47cb9\") " pod="openstack/nova-cell1-conductor-db-sync-pttx4"
Jan 22 09:30:13 crc kubenswrapper[4892]: W0122 09:30:13.316406 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea12e63_6eef_4df6_a9ad_261f657546c3.slice/crio-36604fec81c7268b4811df920f654cd9f67a0340e2ea519e039a00ab8db1a962 WatchSource:0}: Error finding container 36604fec81c7268b4811df920f654cd9f67a0340e2ea519e039a00ab8db1a962: Status 404 returned error can't find the container with id 36604fec81c7268b4811df920f654cd9f67a0340e2ea519e039a00ab8db1a962
Jan 22 09:30:13 crc kubenswrapper[4892]: I0122 09:30:13.325498 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 22 09:30:13 crc kubenswrapper[4892]: I0122 09:30:13.367484 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 22 09:30:13 crc kubenswrapper[4892]: I0122 09:30:13.395157 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw4g5\" (UniqueName: \"kubernetes.io/projected/9af63861-4d6e-46cb-ad5b-0b0161c47cb9-kube-api-access-lw4g5\") pod \"nova-cell1-conductor-db-sync-pttx4\" (UID: \"9af63861-4d6e-46cb-ad5b-0b0161c47cb9\") " pod="openstack/nova-cell1-conductor-db-sync-pttx4"
Jan 22 09:30:13 crc kubenswrapper[4892]: I0122 09:30:13.395211 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af63861-4d6e-46cb-ad5b-0b0161c47cb9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-pttx4\" (UID: \"9af63861-4d6e-46cb-ad5b-0b0161c47cb9\") " pod="openstack/nova-cell1-conductor-db-sync-pttx4"
Jan 22 09:30:13 crc kubenswrapper[4892]: I0122 09:30:13.396132 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af63861-4d6e-46cb-ad5b-0b0161c47cb9-config-data\") pod \"nova-cell1-conductor-db-sync-pttx4\" (UID: \"9af63861-4d6e-46cb-ad5b-0b0161c47cb9\") " pod="openstack/nova-cell1-conductor-db-sync-pttx4"
Jan 22 09:30:13 crc kubenswrapper[4892]: I0122 09:30:13.396262 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9af63861-4d6e-46cb-ad5b-0b0161c47cb9-scripts\") pod \"nova-cell1-conductor-db-sync-pttx4\" (UID: \"9af63861-4d6e-46cb-ad5b-0b0161c47cb9\") " pod="openstack/nova-cell1-conductor-db-sync-pttx4"
Jan 22 09:30:13 crc kubenswrapper[4892]: I0122 09:30:13.400338 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af63861-4d6e-46cb-ad5b-0b0161c47cb9-config-data\") pod \"nova-cell1-conductor-db-sync-pttx4\" (UID: \"9af63861-4d6e-46cb-ad5b-0b0161c47cb9\") " pod="openstack/nova-cell1-conductor-db-sync-pttx4"
Jan 22 09:30:13 crc kubenswrapper[4892]: I0122 09:30:13.401287 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af63861-4d6e-46cb-ad5b-0b0161c47cb9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-pttx4\" (UID: \"9af63861-4d6e-46cb-ad5b-0b0161c47cb9\") " pod="openstack/nova-cell1-conductor-db-sync-pttx4"
Jan 22 09:30:13 crc kubenswrapper[4892]: I0122 09:30:13.409267 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9af63861-4d6e-46cb-ad5b-0b0161c47cb9-scripts\") pod \"nova-cell1-conductor-db-sync-pttx4\" (UID: \"9af63861-4d6e-46cb-ad5b-0b0161c47cb9\") " pod="openstack/nova-cell1-conductor-db-sync-pttx4"
Jan 22 09:30:13 crc kubenswrapper[4892]: I0122 09:30:13.411736 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw4g5\" (UniqueName: \"kubernetes.io/projected/9af63861-4d6e-46cb-ad5b-0b0161c47cb9-kube-api-access-lw4g5\") pod \"nova-cell1-conductor-db-sync-pttx4\" (UID: \"9af63861-4d6e-46cb-ad5b-0b0161c47cb9\") " pod="openstack/nova-cell1-conductor-db-sync-pttx4"
Jan 22 09:30:13 crc kubenswrapper[4892]: I0122 09:30:13.506806 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-7x292"]
Jan 22 09:30:13 crc kubenswrapper[4892]: I0122 09:30:13.603263 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 22 09:30:13 crc kubenswrapper[4892]: I0122 09:30:13.608749 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pttx4"
Jan 22 09:30:14 crc kubenswrapper[4892]: I0122 09:30:14.069757 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pttx4"]
Jan 22 09:30:14 crc kubenswrapper[4892]: W0122 09:30:14.079203 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9af63861_4d6e_46cb_ad5b_0b0161c47cb9.slice/crio-96699f977f76cb2597a7ddba07a142f857dafa92099f97cdb5b3fc72c1bd2d24 WatchSource:0}: Error finding container 96699f977f76cb2597a7ddba07a142f857dafa92099f97cdb5b3fc72c1bd2d24: Status 404 returned error can't find the container with id 96699f977f76cb2597a7ddba07a142f857dafa92099f97cdb5b3fc72c1bd2d24
Jan 22 09:30:14 crc kubenswrapper[4892]: I0122 09:30:14.099056 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7ea12e63-6eef-4df6-a9ad-261f657546c3","Type":"ContainerStarted","Data":"36604fec81c7268b4811df920f654cd9f67a0340e2ea519e039a00ab8db1a962"}
Jan 22 09:30:14 crc kubenswrapper[4892]: I0122 09:30:14.104125 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-kkq2t" event={"ID":"f95183f6-1315-4165-9659-6d1c77f3f9bd","Type":"ContainerStarted","Data":"b2a714e0373a0fd3ac2703eaf46fd10700b0d520c0e832693f53c7a7da1f47ba"}
Jan 22 09:30:14 crc kubenswrapper[4892]: I0122 09:30:14.112998 4892 generic.go:334] "Generic (PLEG): container finished" podID="310458cb-5d40-4525-a26e-0df3583401c7" containerID="547e82401fb7ede95e3c2f650d577d3fd954c539b28cb0701064b43980e4d108" exitCode=0
Jan 22 09:30:14 crc kubenswrapper[4892]: I0122 09:30:14.113073 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-7x292" event={"ID":"310458cb-5d40-4525-a26e-0df3583401c7","Type":"ContainerDied","Data":"547e82401fb7ede95e3c2f650d577d3fd954c539b28cb0701064b43980e4d108"}
Jan 22 09:30:14 crc kubenswrapper[4892]: I0122 09:30:14.113099 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-7x292" event={"ID":"310458cb-5d40-4525-a26e-0df3583401c7","Type":"ContainerStarted","Data":"c3d14e81d3e92ba1a9b57eed245d802b6bb5d418ab52a22359355b6327c91d96"}
Jan 22 09:30:14 crc kubenswrapper[4892]: I0122 09:30:14.117870 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1bcbf858-6b11-4f81-9782-00df8dad36cf","Type":"ContainerStarted","Data":"c97273d47c2166241e06185b7150e9b0697f779ce3762055219d2a3a144fb6ae"}
Jan 22 09:30:14 crc kubenswrapper[4892]: I0122 09:30:14.126894 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a11d9dd9-7bad-489e-89c0-76aba2c3a239","Type":"ContainerStarted","Data":"7adc697c985e9148339a7baf516d5b7eaccc5041f5634326253f74b7e9935e1c"}
Jan 22 09:30:14 crc kubenswrapper[4892]: I0122 09:30:14.129777 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-kkq2t" podStartSLOduration=3.129756166 podStartE2EDuration="3.129756166s" podCreationTimestamp="2026-01-22 09:30:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:30:14.121524596 +0000 UTC m=+1183.965603659" watchObservedRunningTime="2026-01-22 09:30:14.129756166 +0000 UTC m=+1183.973835229"
Jan 22 09:30:14 crc kubenswrapper[4892]: I0122 09:30:14.138621 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5b2bc62f-ae81-4ab9-81e3-310506ee057a","Type":"ContainerStarted","Data":"661d3a6ddb67ffbc2d39038ee684f7bc899ca628524c0895645d98d6a1a335bc"}
Jan 22 09:30:15 crc kubenswrapper[4892]: I0122 09:30:15.151297 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-7x292" event={"ID":"310458cb-5d40-4525-a26e-0df3583401c7","Type":"ContainerStarted","Data":"a8f0efec2021aad932c80bd5e1218c2ffd6057a968f52efde6353b3fa3e1ac52"}
Jan 22 09:30:15 crc kubenswrapper[4892]: I0122 09:30:15.152549 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-557bbc7df7-7x292"
Jan 22 09:30:15 crc kubenswrapper[4892]: I0122 09:30:15.155621 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pttx4" event={"ID":"9af63861-4d6e-46cb-ad5b-0b0161c47cb9","Type":"ContainerStarted","Data":"901500cbf1262ad90b04e55b8170f2240c2451a2c5587e3d938ad7894b2a31b7"}
Jan 22 09:30:15 crc kubenswrapper[4892]: I0122 09:30:15.155813 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pttx4" event={"ID":"9af63861-4d6e-46cb-ad5b-0b0161c47cb9","Type":"ContainerStarted","Data":"96699f977f76cb2597a7ddba07a142f857dafa92099f97cdb5b3fc72c1bd2d24"}
Jan 22 09:30:15 crc kubenswrapper[4892]: I0122 09:30:15.172486 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-557bbc7df7-7x292" podStartSLOduration=3.172468776 podStartE2EDuration="3.172468776s" podCreationTimestamp="2026-01-22 09:30:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:30:15.167992717 +0000 UTC m=+1185.012071780" watchObservedRunningTime="2026-01-22 09:30:15.172468776 +0000 UTC m=+1185.016547839"
Jan 22 09:30:15 crc kubenswrapper[4892]: I0122 09:30:15.201167 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-pttx4" podStartSLOduration=2.201149052 podStartE2EDuration="2.201149052s" podCreationTimestamp="2026-01-22 09:30:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:30:15.197540535 +0000 UTC m=+1185.041619598" watchObservedRunningTime="2026-01-22 09:30:15.201149052 +0000 UTC m=+1185.045228115"
Jan 22 09:30:15 crc kubenswrapper[4892]: I0122 09:30:15.857145 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 22 09:30:15 crc kubenswrapper[4892]: I0122 09:30:15.888670 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 22 09:30:16 crc kubenswrapper[4892]: I0122 09:30:16.324348 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 09:30:16 crc kubenswrapper[4892]: I0122 09:30:16.324427 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 09:30:17 crc kubenswrapper[4892]: I0122 09:30:17.181750 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5b2bc62f-ae81-4ab9-81e3-310506ee057a","Type":"ContainerStarted","Data":"a60550b361096232dfeb609bd3307d456cd741165d7085bd2bb46c14d60de89f"}
Jan 22 09:30:17 crc kubenswrapper[4892]: I0122 09:30:17.183887 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7ea12e63-6eef-4df6-a9ad-261f657546c3","Type":"ContainerStarted","Data":"10a66b212df198e0f5926a20f8fd18efd6d09099bad778a4d4347c197076cf10"}
Jan 22 09:30:17 crc kubenswrapper[4892]: I0122 09:30:17.184033 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="7ea12e63-6eef-4df6-a9ad-261f657546c3" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://10a66b212df198e0f5926a20f8fd18efd6d09099bad778a4d4347c197076cf10" gracePeriod=30
Jan 22 09:30:17 crc kubenswrapper[4892]: I0122 09:30:17.187851 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1bcbf858-6b11-4f81-9782-00df8dad36cf","Type":"ContainerStarted","Data":"888c1f0cc605d7877bf0ee74ced8880290b210d0606653ba8ceb31a0e38e3bf6"}
Jan 22 09:30:17 crc kubenswrapper[4892]: I0122 09:30:17.187890 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1bcbf858-6b11-4f81-9782-00df8dad36cf","Type":"ContainerStarted","Data":"3ad9e55a1b9cca0bc197f2971bc76ad158d8a8e368a502f274c7d4b87462235c"}
Jan 22 09:30:17 crc kubenswrapper[4892]: I0122 09:30:17.187991 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1bcbf858-6b11-4f81-9782-00df8dad36cf" containerName="nova-metadata-log" containerID="cri-o://3ad9e55a1b9cca0bc197f2971bc76ad158d8a8e368a502f274c7d4b87462235c" gracePeriod=30
Jan 22 09:30:17 crc kubenswrapper[4892]: I0122 09:30:17.188084 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1bcbf858-6b11-4f81-9782-00df8dad36cf" containerName="nova-metadata-metadata" containerID="cri-o://888c1f0cc605d7877bf0ee74ced8880290b210d0606653ba8ceb31a0e38e3bf6" gracePeriod=30
Jan 22 09:30:17 crc kubenswrapper[4892]: I0122 09:30:17.196312 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a11d9dd9-7bad-489e-89c0-76aba2c3a239","Type":"ContainerStarted","Data":"f58dbf1064e9566689856058bcc0614dfcbdb10d27e0ac4e4589051009fe0703"}
Jan 22 09:30:17 crc kubenswrapper[4892]: I0122 09:30:17.196349 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a11d9dd9-7bad-489e-89c0-76aba2c3a239","Type":"ContainerStarted","Data":"52e4f8b7eddb779cb01ca8681514635d1175f9a1fdf70fe525306332df6742ba"}
Jan 22 09:30:17 crc kubenswrapper[4892]: I0122 09:30:17.215396 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.141129773 podStartE2EDuration="5.215372253s" podCreationTimestamp="2026-01-22 09:30:12 +0000 UTC" firstStartedPulling="2026-01-22 09:30:13.366426209 +0000 UTC m=+1183.210505272" lastFinishedPulling="2026-01-22 09:30:16.440668689 +0000 UTC m=+1186.284747752" observedRunningTime="2026-01-22 09:30:17.201913926 +0000 UTC m=+1187.045992989" watchObservedRunningTime="2026-01-22 09:30:17.215372253 +0000 UTC m=+1187.059451346"
Jan 22 09:30:17 crc kubenswrapper[4892]: I0122 09:30:17.266586 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.19088697 podStartE2EDuration="5.266569155s" podCreationTimestamp="2026-01-22 09:30:12 +0000 UTC" firstStartedPulling="2026-01-22 09:30:13.362518684 +0000 UTC m=+1183.206597747" lastFinishedPulling="2026-01-22 09:30:16.438200869 +0000 UTC m=+1186.282279932" observedRunningTime="2026-01-22 09:30:17.240634376 +0000 UTC m=+1187.084713449" watchObservedRunningTime="2026-01-22 09:30:17.266569155 +0000 UTC m=+1187.110648218"
Jan 22 09:30:17 crc kubenswrapper[4892]: I0122 09:30:17.270104 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.429762697 podStartE2EDuration="5.270097871s" podCreationTimestamp="2026-01-22 09:30:12 +0000 UTC" firstStartedPulling="2026-01-22 09:30:13.60547084 +0000 UTC m=+1183.449549903" lastFinishedPulling="2026-01-22 09:30:16.445806014 +0000 UTC m=+1186.289885077" observedRunningTime="2026-01-22 09:30:17.262899866 +0000 UTC m=+1187.106978929" watchObservedRunningTime="2026-01-22 09:30:17.270097871 +0000 UTC m=+1187.114176924"
Jan 22 09:30:17 crc kubenswrapper[4892]: I0122 09:30:17.280100 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.086820995 podStartE2EDuration="5.280089684s" podCreationTimestamp="2026-01-22 09:30:12 +0000 UTC" firstStartedPulling="2026-01-22 09:30:13.251779216 +0000 UTC m=+1183.095858279" lastFinishedPulling="2026-01-22 09:30:16.445047905 +0000 UTC m=+1186.289126968" observedRunningTime="2026-01-22 09:30:17.2782792 +0000 UTC m=+1187.122358263" watchObservedRunningTime="2026-01-22 09:30:17.280089684 +0000 UTC m=+1187.124168747"
Jan 22 09:30:17 crc kubenswrapper[4892]: I0122 09:30:17.438236 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 22 09:30:17 crc kubenswrapper[4892]: I0122 09:30:17.640359 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 22 09:30:17 crc kubenswrapper[4892]: I0122 09:30:17.641539 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 22 09:30:17 crc kubenswrapper[4892]: I0122 09:30:17.757044 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 22 09:30:18 crc kubenswrapper[4892]: I0122 09:30:18.205543 4892 generic.go:334] "Generic (PLEG): container finished" podID="1bcbf858-6b11-4f81-9782-00df8dad36cf" containerID="3ad9e55a1b9cca0bc197f2971bc76ad158d8a8e368a502f274c7d4b87462235c" exitCode=143
Jan 22 09:30:18 crc kubenswrapper[4892]: I0122 09:30:18.205596 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1bcbf858-6b11-4f81-9782-00df8dad36cf","Type":"ContainerDied","Data":"3ad9e55a1b9cca0bc197f2971bc76ad158d8a8e368a502f274c7d4b87462235c"}
Jan 22 09:30:21 crc kubenswrapper[4892]: I0122 09:30:21.210525 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 22 09:30:21 crc kubenswrapper[4892]: I0122 09:30:21.234361 4892 generic.go:334] "Generic (PLEG): container finished" podID="f95183f6-1315-4165-9659-6d1c77f3f9bd" containerID="b2a714e0373a0fd3ac2703eaf46fd10700b0d520c0e832693f53c7a7da1f47ba" exitCode=0
Jan 22 09:30:21 crc kubenswrapper[4892]: I0122 09:30:21.234421 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-kkq2t" event={"ID":"f95183f6-1315-4165-9659-6d1c77f3f9bd","Type":"ContainerDied","Data":"b2a714e0373a0fd3ac2703eaf46fd10700b0d520c0e832693f53c7a7da1f47ba"}
Jan 22 09:30:22 crc kubenswrapper[4892]: I0122 09:30:22.253926 4892 generic.go:334] "Generic (PLEG): container finished" podID="9af63861-4d6e-46cb-ad5b-0b0161c47cb9" containerID="901500cbf1262ad90b04e55b8170f2240c2451a2c5587e3d938ad7894b2a31b7" exitCode=0
Jan 22 09:30:22 crc kubenswrapper[4892]: I0122 09:30:22.253992 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pttx4" event={"ID":"9af63861-4d6e-46cb-ad5b-0b0161c47cb9","Type":"ContainerDied","Data":"901500cbf1262ad90b04e55b8170f2240c2451a2c5587e3d938ad7894b2a31b7"}
Jan 22 09:30:22 crc kubenswrapper[4892]: I0122 09:30:22.438826 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 22 09:30:22 crc kubenswrapper[4892]: I0122 09:30:22.476640 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 22 09:30:22 crc kubenswrapper[4892]: I0122 09:30:22.644680 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-kkq2t"
Jan 22 09:30:22 crc kubenswrapper[4892]: I0122 09:30:22.706405 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f95183f6-1315-4165-9659-6d1c77f3f9bd-combined-ca-bundle\") pod \"f95183f6-1315-4165-9659-6d1c77f3f9bd\" (UID: \"f95183f6-1315-4165-9659-6d1c77f3f9bd\") "
Jan 22 09:30:22 crc kubenswrapper[4892]: I0122 09:30:22.706451 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9qts\" (UniqueName: \"kubernetes.io/projected/f95183f6-1315-4165-9659-6d1c77f3f9bd-kube-api-access-c9qts\") pod \"f95183f6-1315-4165-9659-6d1c77f3f9bd\" (UID: \"f95183f6-1315-4165-9659-6d1c77f3f9bd\") "
Jan 22 09:30:22 crc kubenswrapper[4892]: I0122 09:30:22.706492 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f95183f6-1315-4165-9659-6d1c77f3f9bd-scripts\") pod \"f95183f6-1315-4165-9659-6d1c77f3f9bd\" (UID: \"f95183f6-1315-4165-9659-6d1c77f3f9bd\") "
Jan 22 09:30:22 crc kubenswrapper[4892]: I0122 09:30:22.706527 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f95183f6-1315-4165-9659-6d1c77f3f9bd-config-data\") pod \"f95183f6-1315-4165-9659-6d1c77f3f9bd\" (UID: \"f95183f6-1315-4165-9659-6d1c77f3f9bd\") "
Jan 22 09:30:22 crc kubenswrapper[4892]: I0122 09:30:22.712357 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f95183f6-1315-4165-9659-6d1c77f3f9bd-kube-api-access-c9qts" (OuterVolumeSpecName: "kube-api-access-c9qts") pod "f95183f6-1315-4165-9659-6d1c77f3f9bd" (UID: "f95183f6-1315-4165-9659-6d1c77f3f9bd"). InnerVolumeSpecName "kube-api-access-c9qts". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:30:22 crc kubenswrapper[4892]: I0122 09:30:22.713450 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f95183f6-1315-4165-9659-6d1c77f3f9bd-scripts" (OuterVolumeSpecName: "scripts") pod "f95183f6-1315-4165-9659-6d1c77f3f9bd" (UID: "f95183f6-1315-4165-9659-6d1c77f3f9bd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:30:22 crc kubenswrapper[4892]: I0122 09:30:22.733777 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f95183f6-1315-4165-9659-6d1c77f3f9bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f95183f6-1315-4165-9659-6d1c77f3f9bd" (UID: "f95183f6-1315-4165-9659-6d1c77f3f9bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:30:22 crc kubenswrapper[4892]: I0122 09:30:22.738791 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f95183f6-1315-4165-9659-6d1c77f3f9bd-config-data" (OuterVolumeSpecName: "config-data") pod "f95183f6-1315-4165-9659-6d1c77f3f9bd" (UID: "f95183f6-1315-4165-9659-6d1c77f3f9bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:30:22 crc kubenswrapper[4892]: I0122 09:30:22.809245 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f95183f6-1315-4165-9659-6d1c77f3f9bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 09:30:22 crc kubenswrapper[4892]: I0122 09:30:22.809284 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9qts\" (UniqueName: \"kubernetes.io/projected/f95183f6-1315-4165-9659-6d1c77f3f9bd-kube-api-access-c9qts\") on node \"crc\" DevicePath \"\""
Jan 22 09:30:22 crc kubenswrapper[4892]: I0122 09:30:22.809313 4892 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f95183f6-1315-4165-9659-6d1c77f3f9bd-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 09:30:22 crc kubenswrapper[4892]: I0122 09:30:22.809323 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f95183f6-1315-4165-9659-6d1c77f3f9bd-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 09:30:22 crc kubenswrapper[4892]: I0122 09:30:22.874663 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-557bbc7df7-7x292"
Jan 22 09:30:22 crc kubenswrapper[4892]: I0122 09:30:22.935977 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-fqr94"]
Jan 22 09:30:22 crc kubenswrapper[4892]: I0122 09:30:22.936371 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75bfc9b94f-fqr94" podUID="a4609a5c-a8a3-4516-82db-e66273379720" containerName="dnsmasq-dns" containerID="cri-o://4a1fbb875a7cbe42c04e4f5d1d22c48f403b74ffbf539eba43bf3ea73e161559" gracePeriod=10
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.011061 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.011439 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.264475 4892 generic.go:334] "Generic (PLEG): container finished" podID="a4609a5c-a8a3-4516-82db-e66273379720" containerID="4a1fbb875a7cbe42c04e4f5d1d22c48f403b74ffbf539eba43bf3ea73e161559" exitCode=0
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.264528 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-fqr94" event={"ID":"a4609a5c-a8a3-4516-82db-e66273379720","Type":"ContainerDied","Data":"4a1fbb875a7cbe42c04e4f5d1d22c48f403b74ffbf539eba43bf3ea73e161559"}
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.266615 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-kkq2t"
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.267407 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-kkq2t" event={"ID":"f95183f6-1315-4165-9659-6d1c77f3f9bd","Type":"ContainerDied","Data":"e4cc805ca49baf5d050e7fcee831c57947cbff109bc83783ca5c6889a1f6c66f"}
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.267450 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4cc805ca49baf5d050e7fcee831c57947cbff109bc83783ca5c6889a1f6c66f"
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.309909 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.358358 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-fqr94"
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.420823 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4609a5c-a8a3-4516-82db-e66273379720-ovsdbserver-sb\") pod \"a4609a5c-a8a3-4516-82db-e66273379720\" (UID: \"a4609a5c-a8a3-4516-82db-e66273379720\") "
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.420943 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9tvg\" (UniqueName: \"kubernetes.io/projected/a4609a5c-a8a3-4516-82db-e66273379720-kube-api-access-p9tvg\") pod \"a4609a5c-a8a3-4516-82db-e66273379720\" (UID: \"a4609a5c-a8a3-4516-82db-e66273379720\") "
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.420978 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4609a5c-a8a3-4516-82db-e66273379720-dns-swift-storage-0\") pod \"a4609a5c-a8a3-4516-82db-e66273379720\" (UID: \"a4609a5c-a8a3-4516-82db-e66273379720\") "
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.421018 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4609a5c-a8a3-4516-82db-e66273379720-ovsdbserver-nb\") pod \"a4609a5c-a8a3-4516-82db-e66273379720\" (UID: \"a4609a5c-a8a3-4516-82db-e66273379720\") "
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.421072 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4609a5c-a8a3-4516-82db-e66273379720-config\") pod \"a4609a5c-a8a3-4516-82db-e66273379720\" (UID: \"a4609a5c-a8a3-4516-82db-e66273379720\") "
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.421102 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4609a5c-a8a3-4516-82db-e66273379720-dns-svc\") pod \"a4609a5c-a8a3-4516-82db-e66273379720\" (UID: \"a4609a5c-a8a3-4516-82db-e66273379720\") "
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.433716 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4609a5c-a8a3-4516-82db-e66273379720-kube-api-access-p9tvg" (OuterVolumeSpecName: "kube-api-access-p9tvg") pod "a4609a5c-a8a3-4516-82db-e66273379720" (UID: "a4609a5c-a8a3-4516-82db-e66273379720"). InnerVolumeSpecName "kube-api-access-p9tvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.465748 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.465932 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a11d9dd9-7bad-489e-89c0-76aba2c3a239" containerName="nova-api-log" containerID="cri-o://52e4f8b7eddb779cb01ca8681514635d1175f9a1fdf70fe525306332df6742ba" gracePeriod=30
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.466185 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a11d9dd9-7bad-489e-89c0-76aba2c3a239" containerName="nova-api-api" containerID="cri-o://f58dbf1064e9566689856058bcc0614dfcbdb10d27e0ac4e4589051009fe0703" gracePeriod=30
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.482023 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a11d9dd9-7bad-489e-89c0-76aba2c3a239" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.192:8774/\": EOF"
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.482414 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a11d9dd9-7bad-489e-89c0-76aba2c3a239" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.192:8774/\": EOF"
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.513892 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4609a5c-a8a3-4516-82db-e66273379720-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a4609a5c-a8a3-4516-82db-e66273379720" (UID: "a4609a5c-a8a3-4516-82db-e66273379720"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.517155 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4609a5c-a8a3-4516-82db-e66273379720-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a4609a5c-a8a3-4516-82db-e66273379720" (UID: "a4609a5c-a8a3-4516-82db-e66273379720"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.540378 4892 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4609a5c-a8a3-4516-82db-e66273379720-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.540419 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9tvg\" (UniqueName: \"kubernetes.io/projected/a4609a5c-a8a3-4516-82db-e66273379720-kube-api-access-p9tvg\") on node \"crc\" DevicePath \"\""
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.540434 4892 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4609a5c-a8a3-4516-82db-e66273379720-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.544835 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4609a5c-a8a3-4516-82db-e66273379720-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a4609a5c-a8a3-4516-82db-e66273379720" (UID: "a4609a5c-a8a3-4516-82db-e66273379720"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.545082 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4609a5c-a8a3-4516-82db-e66273379720-config" (OuterVolumeSpecName: "config") pod "a4609a5c-a8a3-4516-82db-e66273379720" (UID: "a4609a5c-a8a3-4516-82db-e66273379720"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.610265 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4609a5c-a8a3-4516-82db-e66273379720-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a4609a5c-a8a3-4516-82db-e66273379720" (UID: "a4609a5c-a8a3-4516-82db-e66273379720"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.641988 4892 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4609a5c-a8a3-4516-82db-e66273379720-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.642016 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4609a5c-a8a3-4516-82db-e66273379720-config\") on node \"crc\" DevicePath \"\""
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.642025 4892 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4609a5c-a8a3-4516-82db-e66273379720-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.684511 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pttx4"
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.742734 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9af63861-4d6e-46cb-ad5b-0b0161c47cb9-scripts\") pod \"9af63861-4d6e-46cb-ad5b-0b0161c47cb9\" (UID: \"9af63861-4d6e-46cb-ad5b-0b0161c47cb9\") "
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.742772 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af63861-4d6e-46cb-ad5b-0b0161c47cb9-config-data\") pod \"9af63861-4d6e-46cb-ad5b-0b0161c47cb9\" (UID: \"9af63861-4d6e-46cb-ad5b-0b0161c47cb9\") "
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.742819 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af63861-4d6e-46cb-ad5b-0b0161c47cb9-combined-ca-bundle\") pod \"9af63861-4d6e-46cb-ad5b-0b0161c47cb9\" (UID: \"9af63861-4d6e-46cb-ad5b-0b0161c47cb9\") "
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.743050 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lw4g5\" (UniqueName: \"kubernetes.io/projected/9af63861-4d6e-46cb-ad5b-0b0161c47cb9-kube-api-access-lw4g5\") pod \"9af63861-4d6e-46cb-ad5b-0b0161c47cb9\" (UID: \"9af63861-4d6e-46cb-ad5b-0b0161c47cb9\") "
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.751225 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9af63861-4d6e-46cb-ad5b-0b0161c47cb9-scripts" (OuterVolumeSpecName: "scripts") pod "9af63861-4d6e-46cb-ad5b-0b0161c47cb9" (UID: "9af63861-4d6e-46cb-ad5b-0b0161c47cb9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.755446 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9af63861-4d6e-46cb-ad5b-0b0161c47cb9-kube-api-access-lw4g5" (OuterVolumeSpecName: "kube-api-access-lw4g5") pod "9af63861-4d6e-46cb-ad5b-0b0161c47cb9" (UID: "9af63861-4d6e-46cb-ad5b-0b0161c47cb9"). InnerVolumeSpecName "kube-api-access-lw4g5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.783653 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9af63861-4d6e-46cb-ad5b-0b0161c47cb9-config-data" (OuterVolumeSpecName: "config-data") pod "9af63861-4d6e-46cb-ad5b-0b0161c47cb9" (UID: "9af63861-4d6e-46cb-ad5b-0b0161c47cb9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.783856 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9af63861-4d6e-46cb-ad5b-0b0161c47cb9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9af63861-4d6e-46cb-ad5b-0b0161c47cb9" (UID: "9af63861-4d6e-46cb-ad5b-0b0161c47cb9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.806860 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.844641 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lw4g5\" (UniqueName: \"kubernetes.io/projected/9af63861-4d6e-46cb-ad5b-0b0161c47cb9-kube-api-access-lw4g5\") on node \"crc\" DevicePath \"\""
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.844864 4892 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9af63861-4d6e-46cb-ad5b-0b0161c47cb9-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.844875 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af63861-4d6e-46cb-ad5b-0b0161c47cb9-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 09:30:23 crc kubenswrapper[4892]: I0122 09:30:23.844886 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af63861-4d6e-46cb-ad5b-0b0161c47cb9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 09:30:24 crc kubenswrapper[4892]: I0122 09:30:24.276781 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pttx4"
Jan 22 09:30:24 crc kubenswrapper[4892]: I0122 09:30:24.276935 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pttx4" event={"ID":"9af63861-4d6e-46cb-ad5b-0b0161c47cb9","Type":"ContainerDied","Data":"96699f977f76cb2597a7ddba07a142f857dafa92099f97cdb5b3fc72c1bd2d24"}
Jan 22 09:30:24 crc kubenswrapper[4892]: I0122 09:30:24.277585 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96699f977f76cb2597a7ddba07a142f857dafa92099f97cdb5b3fc72c1bd2d24"
Jan 22 09:30:24 crc kubenswrapper[4892]: I0122 09:30:24.283600 4892 generic.go:334] "Generic (PLEG): container finished" podID="a11d9dd9-7bad-489e-89c0-76aba2c3a239" containerID="52e4f8b7eddb779cb01ca8681514635d1175f9a1fdf70fe525306332df6742ba" exitCode=143
Jan 22 09:30:24 crc kubenswrapper[4892]: I0122 09:30:24.283664 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a11d9dd9-7bad-489e-89c0-76aba2c3a239","Type":"ContainerDied","Data":"52e4f8b7eddb779cb01ca8681514635d1175f9a1fdf70fe525306332df6742ba"}
Jan 22 09:30:24 crc kubenswrapper[4892]: I0122 09:30:24.286270 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-fqr94"
Jan 22 09:30:24 crc kubenswrapper[4892]: I0122 09:30:24.295422 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-fqr94" event={"ID":"a4609a5c-a8a3-4516-82db-e66273379720","Type":"ContainerDied","Data":"8c0476b1a012178464a2103d8f1a4e34b78d4532f74d901dae93e71f7a916a92"}
Jan 22 09:30:24 crc kubenswrapper[4892]: I0122 09:30:24.295530 4892 scope.go:117] "RemoveContainer" containerID="4a1fbb875a7cbe42c04e4f5d1d22c48f403b74ffbf539eba43bf3ea73e161559"
Jan 22 09:30:24 crc kubenswrapper[4892]: I0122 09:30:24.343379 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-fqr94"]
Jan 22 09:30:24 crc kubenswrapper[4892]: I0122 09:30:24.353798 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-fqr94"]
Jan 22 09:30:24 crc kubenswrapper[4892]: I0122 09:30:24.356992 4892 scope.go:117] "RemoveContainer" containerID="b7106e4c2c0db2d87d4e3fd49f9f61d74404cfdb7c70660512ad56fc51b40e77"
Jan 22 09:30:24 crc kubenswrapper[4892]: I0122 09:30:24.384345 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 22 09:30:24 crc kubenswrapper[4892]: E0122 09:30:24.384931 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f95183f6-1315-4165-9659-6d1c77f3f9bd" containerName="nova-manage"
Jan 22 09:30:24 crc kubenswrapper[4892]: I0122 09:30:24.385024 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="f95183f6-1315-4165-9659-6d1c77f3f9bd" containerName="nova-manage"
Jan 22 09:30:24 crc kubenswrapper[4892]: E0122 09:30:24.385095 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9af63861-4d6e-46cb-ad5b-0b0161c47cb9" containerName="nova-cell1-conductor-db-sync"
Jan 22 09:30:24 crc kubenswrapper[4892]: I0122 09:30:24.385148 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="9af63861-4d6e-46cb-ad5b-0b0161c47cb9" containerName="nova-cell1-conductor-db-sync"
Jan 22 09:30:24 crc kubenswrapper[4892]: E0122 09:30:24.385204 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4609a5c-a8a3-4516-82db-e66273379720" containerName="dnsmasq-dns"
Jan 22 09:30:24 crc kubenswrapper[4892]: I0122 09:30:24.385315 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4609a5c-a8a3-4516-82db-e66273379720" containerName="dnsmasq-dns"
Jan 22 09:30:24 crc kubenswrapper[4892]: E0122 09:30:24.385392 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4609a5c-a8a3-4516-82db-e66273379720" containerName="init"
Jan 22 09:30:24 crc kubenswrapper[4892]: I0122 09:30:24.385449 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4609a5c-a8a3-4516-82db-e66273379720" containerName="init"
Jan 22 09:30:24 crc kubenswrapper[4892]: I0122 09:30:24.385657 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4609a5c-a8a3-4516-82db-e66273379720" containerName="dnsmasq-dns"
Jan 22 09:30:24 crc kubenswrapper[4892]: I0122 09:30:24.385722 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="f95183f6-1315-4165-9659-6d1c77f3f9bd" containerName="nova-manage"
Jan 22 09:30:24 crc kubenswrapper[4892]: I0122 09:30:24.385795 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="9af63861-4d6e-46cb-ad5b-0b0161c47cb9" containerName="nova-cell1-conductor-db-sync"
Jan 22 09:30:24 crc kubenswrapper[4892]: I0122 09:30:24.386440 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 22 09:30:24 crc kubenswrapper[4892]: I0122 09:30:24.392526 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 22 09:30:24 crc kubenswrapper[4892]: I0122 09:30:24.395093 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 22 09:30:24 crc kubenswrapper[4892]: I0122 09:30:24.458025 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq4bc\" (UniqueName: \"kubernetes.io/projected/c1860910-1d6f-45fc-b0ce-7aef22083de7-kube-api-access-tq4bc\") pod \"nova-cell1-conductor-0\" (UID: \"c1860910-1d6f-45fc-b0ce-7aef22083de7\") " pod="openstack/nova-cell1-conductor-0"
Jan 22 09:30:24 crc kubenswrapper[4892]: I0122 09:30:24.458993 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1860910-1d6f-45fc-b0ce-7aef22083de7-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c1860910-1d6f-45fc-b0ce-7aef22083de7\") " pod="openstack/nova-cell1-conductor-0"
Jan 22 09:30:24 crc kubenswrapper[4892]: I0122 09:30:24.459130 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1860910-1d6f-45fc-b0ce-7aef22083de7-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c1860910-1d6f-45fc-b0ce-7aef22083de7\") " pod="openstack/nova-cell1-conductor-0"
Jan 22 09:30:24 crc kubenswrapper[4892]: I0122 09:30:24.561084 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq4bc\" (UniqueName: \"kubernetes.io/projected/c1860910-1d6f-45fc-b0ce-7aef22083de7-kube-api-access-tq4bc\") pod \"nova-cell1-conductor-0\" (UID: \"c1860910-1d6f-45fc-b0ce-7aef22083de7\") " pod="openstack/nova-cell1-conductor-0"
Jan 22 09:30:24 crc kubenswrapper[4892]: I0122 09:30:24.561138 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1860910-1d6f-45fc-b0ce-7aef22083de7-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c1860910-1d6f-45fc-b0ce-7aef22083de7\") " pod="openstack/nova-cell1-conductor-0"
Jan 22 09:30:24 crc kubenswrapper[4892]: I0122 09:30:24.562002 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1860910-1d6f-45fc-b0ce-7aef22083de7-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c1860910-1d6f-45fc-b0ce-7aef22083de7\") " pod="openstack/nova-cell1-conductor-0"
Jan 22 09:30:24 crc kubenswrapper[4892]: I0122 09:30:24.566946 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1860910-1d6f-45fc-b0ce-7aef22083de7-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c1860910-1d6f-45fc-b0ce-7aef22083de7\") " pod="openstack/nova-cell1-conductor-0"
Jan 22 09:30:24 crc kubenswrapper[4892]: I0122 09:30:24.567111 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1860910-1d6f-45fc-b0ce-7aef22083de7-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c1860910-1d6f-45fc-b0ce-7aef22083de7\") " pod="openstack/nova-cell1-conductor-0"
Jan 22 09:30:24 crc kubenswrapper[4892]: I0122 09:30:24.577549 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq4bc\" (UniqueName: \"kubernetes.io/projected/c1860910-1d6f-45fc-b0ce-7aef22083de7-kube-api-access-tq4bc\") pod \"nova-cell1-conductor-0\" (UID: \"c1860910-1d6f-45fc-b0ce-7aef22083de7\") " pod="openstack/nova-cell1-conductor-0"
Jan 22 09:30:24 crc kubenswrapper[4892]: I0122 09:30:24.715521 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 22 09:30:25 crc kubenswrapper[4892]: I0122 09:30:25.252415 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 22 09:30:25 crc kubenswrapper[4892]: I0122 09:30:25.311902 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c1860910-1d6f-45fc-b0ce-7aef22083de7","Type":"ContainerStarted","Data":"28e17cc2c5094b85a40a5ea9eab8536368edf720b92422af6aad3f0e19db2a23"}
Jan 22 09:30:25 crc kubenswrapper[4892]: I0122 09:30:25.315464 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="5b2bc62f-ae81-4ab9-81e3-310506ee057a" containerName="nova-scheduler-scheduler" containerID="cri-o://a60550b361096232dfeb609bd3307d456cd741165d7085bd2bb46c14d60de89f" gracePeriod=30
Jan 22 09:30:25 crc kubenswrapper[4892]: I0122 09:30:25.390912 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 22 09:30:25 crc kubenswrapper[4892]: I0122 09:30:25.391335 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="192369ce-10a4-47fb-9813-94de83265f37" containerName="kube-state-metrics" containerID="cri-o://91dd86529dd751c0ed5e3fc507e982dea16e91ebd306737380337361da42eb02" gracePeriod=30
Jan 22 09:30:25 crc kubenswrapper[4892]: I0122 09:30:25.432012 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4609a5c-a8a3-4516-82db-e66273379720" path="/var/lib/kubelet/pods/a4609a5c-a8a3-4516-82db-e66273379720/volumes"
Jan 22 09:30:25 crc kubenswrapper[4892]: I0122 09:30:25.850159 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 22 09:30:25 crc kubenswrapper[4892]: I0122 09:30:25.908977 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jxvz\" (UniqueName: \"kubernetes.io/projected/192369ce-10a4-47fb-9813-94de83265f37-kube-api-access-6jxvz\") pod \"192369ce-10a4-47fb-9813-94de83265f37\" (UID: \"192369ce-10a4-47fb-9813-94de83265f37\") "
Jan 22 09:30:25 crc kubenswrapper[4892]: I0122 09:30:25.929316 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/192369ce-10a4-47fb-9813-94de83265f37-kube-api-access-6jxvz" (OuterVolumeSpecName: "kube-api-access-6jxvz") pod "192369ce-10a4-47fb-9813-94de83265f37" (UID: "192369ce-10a4-47fb-9813-94de83265f37"). InnerVolumeSpecName "kube-api-access-6jxvz".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:30:26 crc kubenswrapper[4892]: I0122 09:30:26.011567 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jxvz\" (UniqueName: \"kubernetes.io/projected/192369ce-10a4-47fb-9813-94de83265f37-kube-api-access-6jxvz\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:26 crc kubenswrapper[4892]: I0122 09:30:26.324408 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c1860910-1d6f-45fc-b0ce-7aef22083de7","Type":"ContainerStarted","Data":"b108b41cc59e4cc42ba5d1213e641c7448c8ed378ee85c7a5af75743199653cf"} Jan 22 09:30:26 crc kubenswrapper[4892]: I0122 09:30:26.324548 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 22 09:30:26 crc kubenswrapper[4892]: I0122 09:30:26.325994 4892 generic.go:334] "Generic (PLEG): container finished" podID="192369ce-10a4-47fb-9813-94de83265f37" containerID="91dd86529dd751c0ed5e3fc507e982dea16e91ebd306737380337361da42eb02" exitCode=2 Jan 22 09:30:26 crc kubenswrapper[4892]: I0122 09:30:26.326038 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"192369ce-10a4-47fb-9813-94de83265f37","Type":"ContainerDied","Data":"91dd86529dd751c0ed5e3fc507e982dea16e91ebd306737380337361da42eb02"} Jan 22 09:30:26 crc kubenswrapper[4892]: I0122 09:30:26.326063 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"192369ce-10a4-47fb-9813-94de83265f37","Type":"ContainerDied","Data":"4ca67edbbbc50aa3f2c735ee9c250b60a2105b24e3e59873b1335b0b3a6f7620"} Jan 22 09:30:26 crc kubenswrapper[4892]: I0122 09:30:26.326079 4892 scope.go:117] "RemoveContainer" containerID="91dd86529dd751c0ed5e3fc507e982dea16e91ebd306737380337361da42eb02" Jan 22 09:30:26 crc kubenswrapper[4892]: I0122 09:30:26.326221 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 09:30:26 crc kubenswrapper[4892]: I0122 09:30:26.344777 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.344754639 podStartE2EDuration="2.344754639s" podCreationTimestamp="2026-01-22 09:30:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:30:26.339938362 +0000 UTC m=+1196.184017435" watchObservedRunningTime="2026-01-22 09:30:26.344754639 +0000 UTC m=+1196.188833702" Jan 22 09:30:26 crc kubenswrapper[4892]: I0122 09:30:26.374354 4892 scope.go:117] "RemoveContainer" containerID="91dd86529dd751c0ed5e3fc507e982dea16e91ebd306737380337361da42eb02" Jan 22 09:30:26 crc kubenswrapper[4892]: E0122 09:30:26.375454 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91dd86529dd751c0ed5e3fc507e982dea16e91ebd306737380337361da42eb02\": container with ID starting with 91dd86529dd751c0ed5e3fc507e982dea16e91ebd306737380337361da42eb02 not found: ID does not exist" containerID="91dd86529dd751c0ed5e3fc507e982dea16e91ebd306737380337361da42eb02" Jan 22 09:30:26 crc kubenswrapper[4892]: I0122 09:30:26.375481 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91dd86529dd751c0ed5e3fc507e982dea16e91ebd306737380337361da42eb02"} err="failed to get container status \"91dd86529dd751c0ed5e3fc507e982dea16e91ebd306737380337361da42eb02\": rpc error: code = NotFound desc = could not find container \"91dd86529dd751c0ed5e3fc507e982dea16e91ebd306737380337361da42eb02\": container with ID starting with 91dd86529dd751c0ed5e3fc507e982dea16e91ebd306737380337361da42eb02 not found: ID does not exist" Jan 22 09:30:26 crc kubenswrapper[4892]: I0122 09:30:26.379388 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 09:30:26 crc kubenswrapper[4892]: I0122 09:30:26.386326 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 09:30:26 crc kubenswrapper[4892]: I0122 09:30:26.397652 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 09:30:26 crc kubenswrapper[4892]: E0122 09:30:26.398014 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="192369ce-10a4-47fb-9813-94de83265f37" containerName="kube-state-metrics" Jan 22 09:30:26 crc kubenswrapper[4892]: I0122 09:30:26.398032 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="192369ce-10a4-47fb-9813-94de83265f37" containerName="kube-state-metrics" Jan 22 09:30:26 crc kubenswrapper[4892]: I0122 09:30:26.398205 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="192369ce-10a4-47fb-9813-94de83265f37" containerName="kube-state-metrics" Jan 22 09:30:26 crc kubenswrapper[4892]: I0122 09:30:26.398795 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 09:30:26 crc kubenswrapper[4892]: I0122 09:30:26.401740 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 22 09:30:26 crc kubenswrapper[4892]: I0122 09:30:26.402333 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 22 09:30:26 crc kubenswrapper[4892]: I0122 09:30:26.405304 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 09:30:26 crc kubenswrapper[4892]: I0122 09:30:26.520672 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k746\" (UniqueName: \"kubernetes.io/projected/6948adf9-b332-4b21-82e2-444fc998ebe5-kube-api-access-8k746\") pod \"kube-state-metrics-0\" (UID: \"6948adf9-b332-4b21-82e2-444fc998ebe5\") " pod="openstack/kube-state-metrics-0" Jan 22 09:30:26 crc kubenswrapper[4892]: I0122 09:30:26.521099 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6948adf9-b332-4b21-82e2-444fc998ebe5-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6948adf9-b332-4b21-82e2-444fc998ebe5\") " pod="openstack/kube-state-metrics-0" Jan 22 09:30:26 crc kubenswrapper[4892]: I0122 09:30:26.521155 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6948adf9-b332-4b21-82e2-444fc998ebe5-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6948adf9-b332-4b21-82e2-444fc998ebe5\") " pod="openstack/kube-state-metrics-0" Jan 22 09:30:26 crc kubenswrapper[4892]: I0122 09:30:26.521214 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6948adf9-b332-4b21-82e2-444fc998ebe5-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6948adf9-b332-4b21-82e2-444fc998ebe5\") " pod="openstack/kube-state-metrics-0" Jan 22 09:30:26 crc kubenswrapper[4892]: I0122 09:30:26.623597 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6948adf9-b332-4b21-82e2-444fc998ebe5-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6948adf9-b332-4b21-82e2-444fc998ebe5\") " pod="openstack/kube-state-metrics-0" Jan 22 09:30:26 crc kubenswrapper[4892]: I0122 09:30:26.623693 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6948adf9-b332-4b21-82e2-444fc998ebe5-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6948adf9-b332-4b21-82e2-444fc998ebe5\") " pod="openstack/kube-state-metrics-0" Jan 22 09:30:26 crc kubenswrapper[4892]: I0122 09:30:26.623756 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6948adf9-b332-4b21-82e2-444fc998ebe5-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6948adf9-b332-4b21-82e2-444fc998ebe5\") " pod="openstack/kube-state-metrics-0" Jan 22 09:30:26 crc kubenswrapper[4892]: I0122 09:30:26.623818 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k746\" 
(UniqueName: \"kubernetes.io/projected/6948adf9-b332-4b21-82e2-444fc998ebe5-kube-api-access-8k746\") pod \"kube-state-metrics-0\" (UID: \"6948adf9-b332-4b21-82e2-444fc998ebe5\") " pod="openstack/kube-state-metrics-0" Jan 22 09:30:26 crc kubenswrapper[4892]: I0122 09:30:26.631903 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6948adf9-b332-4b21-82e2-444fc998ebe5-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6948adf9-b332-4b21-82e2-444fc998ebe5\") " pod="openstack/kube-state-metrics-0" Jan 22 09:30:26 crc kubenswrapper[4892]: I0122 09:30:26.631978 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6948adf9-b332-4b21-82e2-444fc998ebe5-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6948adf9-b332-4b21-82e2-444fc998ebe5\") " pod="openstack/kube-state-metrics-0" Jan 22 09:30:26 crc kubenswrapper[4892]: I0122 09:30:26.632013 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6948adf9-b332-4b21-82e2-444fc998ebe5-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6948adf9-b332-4b21-82e2-444fc998ebe5\") " pod="openstack/kube-state-metrics-0" Jan 22 09:30:26 crc kubenswrapper[4892]: I0122 09:30:26.641938 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k746\" (UniqueName: \"kubernetes.io/projected/6948adf9-b332-4b21-82e2-444fc998ebe5-kube-api-access-8k746\") pod \"kube-state-metrics-0\" (UID: \"6948adf9-b332-4b21-82e2-444fc998ebe5\") " pod="openstack/kube-state-metrics-0" Jan 22 09:30:26 crc kubenswrapper[4892]: I0122 09:30:26.726726 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 09:30:27 crc kubenswrapper[4892]: W0122 09:30:27.183436 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6948adf9_b332_4b21_82e2_444fc998ebe5.slice/crio-48a62754ba320b96badff42e0689bf9df55786e67ae17decc8fff36daabbf5a1 WatchSource:0}: Error finding container 48a62754ba320b96badff42e0689bf9df55786e67ae17decc8fff36daabbf5a1: Status 404 returned error can't find the container with id 48a62754ba320b96badff42e0689bf9df55786e67ae17decc8fff36daabbf5a1 Jan 22 09:30:27 crc kubenswrapper[4892]: I0122 09:30:27.183748 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 09:30:27 crc kubenswrapper[4892]: I0122 09:30:27.187911 4892 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 09:30:27 crc kubenswrapper[4892]: I0122 09:30:27.196506 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:30:27 crc kubenswrapper[4892]: I0122 09:30:27.196809 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a52996eb-0df4-4f58-9f44-186a19c25555" containerName="ceilometer-central-agent" containerID="cri-o://ab01d620c9250b713d766214a10e344eec5cee81aff018cfc5569bc384782257" gracePeriod=30 Jan 22 09:30:27 crc kubenswrapper[4892]: I0122 09:30:27.197340 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a52996eb-0df4-4f58-9f44-186a19c25555" containerName="proxy-httpd" containerID="cri-o://6c3f030b07aaf1ba33c787866c6674bb9383137538173f28af52c6888e5ef677" gracePeriod=30 Jan 22 09:30:27 crc kubenswrapper[4892]: I0122 09:30:27.197399 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a52996eb-0df4-4f58-9f44-186a19c25555" containerName="sg-core" containerID="cri-o://61b5fb18f1d0ea77e2a6f67803bf1ae56e54123ce84b8a3497473c386a59e3b6" gracePeriod=30 Jan 22 09:30:27 crc kubenswrapper[4892]: I0122 09:30:27.197445 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a52996eb-0df4-4f58-9f44-186a19c25555" containerName="ceilometer-notification-agent" containerID="cri-o://9b20bcd741a6581cfe6e10b52023336281a8eaaca91c9cdfe5c436015e86d937" gracePeriod=30 Jan 22 09:30:27 crc kubenswrapper[4892]: I0122 09:30:27.339578 4892 generic.go:334] "Generic (PLEG): container finished" podID="a52996eb-0df4-4f58-9f44-186a19c25555" containerID="61b5fb18f1d0ea77e2a6f67803bf1ae56e54123ce84b8a3497473c386a59e3b6" exitCode=2 Jan 22 09:30:27 crc kubenswrapper[4892]: I0122 09:30:27.339624 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a52996eb-0df4-4f58-9f44-186a19c25555","Type":"ContainerDied","Data":"61b5fb18f1d0ea77e2a6f67803bf1ae56e54123ce84b8a3497473c386a59e3b6"} Jan 22 09:30:27 crc kubenswrapper[4892]: I0122 09:30:27.341824 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6948adf9-b332-4b21-82e2-444fc998ebe5","Type":"ContainerStarted","Data":"48a62754ba320b96badff42e0689bf9df55786e67ae17decc8fff36daabbf5a1"} Jan 22 09:30:27 crc kubenswrapper[4892]: I0122 09:30:27.430481 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="192369ce-10a4-47fb-9813-94de83265f37" 
path="/var/lib/kubelet/pods/192369ce-10a4-47fb-9813-94de83265f37/volumes" Jan 22 09:30:27 crc kubenswrapper[4892]: E0122 09:30:27.439778 4892 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a60550b361096232dfeb609bd3307d456cd741165d7085bd2bb46c14d60de89f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 22 09:30:27 crc kubenswrapper[4892]: E0122 09:30:27.441962 4892 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a60550b361096232dfeb609bd3307d456cd741165d7085bd2bb46c14d60de89f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 22 09:30:27 crc kubenswrapper[4892]: E0122 09:30:27.443362 4892 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a60550b361096232dfeb609bd3307d456cd741165d7085bd2bb46c14d60de89f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 22 09:30:27 crc kubenswrapper[4892]: E0122 09:30:27.443394 4892 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="5b2bc62f-ae81-4ab9-81e3-310506ee057a" containerName="nova-scheduler-scheduler" Jan 22 09:30:28 crc kubenswrapper[4892]: I0122 09:30:28.357265 4892 generic.go:334] "Generic (PLEG): container finished" podID="a52996eb-0df4-4f58-9f44-186a19c25555" containerID="6c3f030b07aaf1ba33c787866c6674bb9383137538173f28af52c6888e5ef677" exitCode=0 Jan 22 09:30:28 crc kubenswrapper[4892]: I0122 09:30:28.357534 4892 generic.go:334] "Generic (PLEG): container finished" podID="a52996eb-0df4-4f58-9f44-186a19c25555" containerID="ab01d620c9250b713d766214a10e344eec5cee81aff018cfc5569bc384782257" exitCode=0 Jan 22 09:30:28 crc kubenswrapper[4892]: I0122 09:30:28.357346 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a52996eb-0df4-4f58-9f44-186a19c25555","Type":"ContainerDied","Data":"6c3f030b07aaf1ba33c787866c6674bb9383137538173f28af52c6888e5ef677"} Jan 22 09:30:28 crc kubenswrapper[4892]: I0122 09:30:28.357569 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a52996eb-0df4-4f58-9f44-186a19c25555","Type":"ContainerDied","Data":"ab01d620c9250b713d766214a10e344eec5cee81aff018cfc5569bc384782257"} Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.370343 4892 generic.go:334] "Generic (PLEG): container finished" podID="5b2bc62f-ae81-4ab9-81e3-310506ee057a" containerID="a60550b361096232dfeb609bd3307d456cd741165d7085bd2bb46c14d60de89f" exitCode=0 Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.370494 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5b2bc62f-ae81-4ab9-81e3-310506ee057a","Type":"ContainerDied","Data":"a60550b361096232dfeb609bd3307d456cd741165d7085bd2bb46c14d60de89f"} Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.370856 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"5b2bc62f-ae81-4ab9-81e3-310506ee057a","Type":"ContainerDied","Data":"661d3a6ddb67ffbc2d39038ee684f7bc899ca628524c0895645d98d6a1a335bc"} Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.370873 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="661d3a6ddb67ffbc2d39038ee684f7bc899ca628524c0895645d98d6a1a335bc" Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.374937 4892 generic.go:334] "Generic (PLEG): container finished" podID="a52996eb-0df4-4f58-9f44-186a19c25555" containerID="9b20bcd741a6581cfe6e10b52023336281a8eaaca91c9cdfe5c436015e86d937" exitCode=0 Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.374986 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a52996eb-0df4-4f58-9f44-186a19c25555","Type":"ContainerDied","Data":"9b20bcd741a6581cfe6e10b52023336281a8eaaca91c9cdfe5c436015e86d937"} Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.376988 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6948adf9-b332-4b21-82e2-444fc998ebe5","Type":"ContainerStarted","Data":"152ceeb2674bdd101d64c6691681221e87eeacf910c7481e8e9c3ee643cd3ac5"} Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.378193 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.397130 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.646667623 podStartE2EDuration="3.397113618s" podCreationTimestamp="2026-01-22 09:30:26 +0000 UTC" firstStartedPulling="2026-01-22 09:30:27.187593067 +0000 UTC m=+1197.031672130" lastFinishedPulling="2026-01-22 09:30:27.938039052 +0000 UTC m=+1197.782118125" observedRunningTime="2026-01-22 09:30:29.394808862 +0000 UTC m=+1199.238887925" watchObservedRunningTime="2026-01-22 09:30:29.397113618 +0000 UTC m=+1199.241192681" Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.403261 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.479676 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b2bc62f-ae81-4ab9-81e3-310506ee057a-combined-ca-bundle\") pod \"5b2bc62f-ae81-4ab9-81e3-310506ee057a\" (UID: \"5b2bc62f-ae81-4ab9-81e3-310506ee057a\") " Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.479794 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b2bc62f-ae81-4ab9-81e3-310506ee057a-config-data\") pod \"5b2bc62f-ae81-4ab9-81e3-310506ee057a\" (UID: \"5b2bc62f-ae81-4ab9-81e3-310506ee057a\") " Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.479820 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d7xj\" (UniqueName: \"kubernetes.io/projected/5b2bc62f-ae81-4ab9-81e3-310506ee057a-kube-api-access-4d7xj\") pod \"5b2bc62f-ae81-4ab9-81e3-310506ee057a\" (UID: \"5b2bc62f-ae81-4ab9-81e3-310506ee057a\") " Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.488441 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b2bc62f-ae81-4ab9-81e3-310506ee057a-kube-api-access-4d7xj" (OuterVolumeSpecName: "kube-api-access-4d7xj") pod "5b2bc62f-ae81-4ab9-81e3-310506ee057a" (UID: "5b2bc62f-ae81-4ab9-81e3-310506ee057a"). InnerVolumeSpecName "kube-api-access-4d7xj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.517526 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b2bc62f-ae81-4ab9-81e3-310506ee057a-config-data" (OuterVolumeSpecName: "config-data") pod "5b2bc62f-ae81-4ab9-81e3-310506ee057a" (UID: "5b2bc62f-ae81-4ab9-81e3-310506ee057a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.518532 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b2bc62f-ae81-4ab9-81e3-310506ee057a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b2bc62f-ae81-4ab9-81e3-310506ee057a" (UID: "5b2bc62f-ae81-4ab9-81e3-310506ee057a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.581939 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b2bc62f-ae81-4ab9-81e3-310506ee057a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.581974 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b2bc62f-ae81-4ab9-81e3-310506ee057a-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.581984 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d7xj\" (UniqueName: \"kubernetes.io/projected/5b2bc62f-ae81-4ab9-81e3-310506ee057a-kube-api-access-4d7xj\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.651730 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.682930 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a52996eb-0df4-4f58-9f44-186a19c25555-scripts\") pod \"a52996eb-0df4-4f58-9f44-186a19c25555\" (UID: \"a52996eb-0df4-4f58-9f44-186a19c25555\") " Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.683064 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a52996eb-0df4-4f58-9f44-186a19c25555-config-data\") pod \"a52996eb-0df4-4f58-9f44-186a19c25555\" (UID: \"a52996eb-0df4-4f58-9f44-186a19c25555\") " Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.683099 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n6sd\" (UniqueName: \"kubernetes.io/projected/a52996eb-0df4-4f58-9f44-186a19c25555-kube-api-access-5n6sd\") pod \"a52996eb-0df4-4f58-9f44-186a19c25555\" (UID: \"a52996eb-0df4-4f58-9f44-186a19c25555\") " Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.683153 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a52996eb-0df4-4f58-9f44-186a19c25555-sg-core-conf-yaml\") pod \"a52996eb-0df4-4f58-9f44-186a19c25555\" (UID: \"a52996eb-0df4-4f58-9f44-186a19c25555\") " Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.683210 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a52996eb-0df4-4f58-9f44-186a19c25555-run-httpd\") pod \"a52996eb-0df4-4f58-9f44-186a19c25555\" (UID: \"a52996eb-0df4-4f58-9f44-186a19c25555\") " Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.683255 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a52996eb-0df4-4f58-9f44-186a19c25555-combined-ca-bundle\") pod \"a52996eb-0df4-4f58-9f44-186a19c25555\" (UID: \"a52996eb-0df4-4f58-9f44-186a19c25555\") " Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.683323 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a52996eb-0df4-4f58-9f44-186a19c25555-log-httpd\") pod \"a52996eb-0df4-4f58-9f44-186a19c25555\" (UID: \"a52996eb-0df4-4f58-9f44-186a19c25555\") " Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.685771 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a52996eb-0df4-4f58-9f44-186a19c25555-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a52996eb-0df4-4f58-9f44-186a19c25555" (UID: "a52996eb-0df4-4f58-9f44-186a19c25555"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.687201 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52996eb-0df4-4f58-9f44-186a19c25555-kube-api-access-5n6sd" (OuterVolumeSpecName: "kube-api-access-5n6sd") pod "a52996eb-0df4-4f58-9f44-186a19c25555" (UID: "a52996eb-0df4-4f58-9f44-186a19c25555"). InnerVolumeSpecName "kube-api-access-5n6sd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.688563 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a52996eb-0df4-4f58-9f44-186a19c25555-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a52996eb-0df4-4f58-9f44-186a19c25555" (UID: "a52996eb-0df4-4f58-9f44-186a19c25555"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.695866 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52996eb-0df4-4f58-9f44-186a19c25555-scripts" (OuterVolumeSpecName: "scripts") pod "a52996eb-0df4-4f58-9f44-186a19c25555" (UID: "a52996eb-0df4-4f58-9f44-186a19c25555"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.715812 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52996eb-0df4-4f58-9f44-186a19c25555-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a52996eb-0df4-4f58-9f44-186a19c25555" (UID: "a52996eb-0df4-4f58-9f44-186a19c25555"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.785276 4892 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a52996eb-0df4-4f58-9f44-186a19c25555-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.785318 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n6sd\" (UniqueName: \"kubernetes.io/projected/a52996eb-0df4-4f58-9f44-186a19c25555-kube-api-access-5n6sd\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.785330 4892 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a52996eb-0df4-4f58-9f44-186a19c25555-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.785338 4892 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a52996eb-0df4-4f58-9f44-186a19c25555-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.785348 4892 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a52996eb-0df4-4f58-9f44-186a19c25555-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.786808 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52996eb-0df4-4f58-9f44-186a19c25555-config-data" (OuterVolumeSpecName: "config-data") pod "a52996eb-0df4-4f58-9f44-186a19c25555" (UID: "a52996eb-0df4-4f58-9f44-186a19c25555"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.806349 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52996eb-0df4-4f58-9f44-186a19c25555-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a52996eb-0df4-4f58-9f44-186a19c25555" (UID: "a52996eb-0df4-4f58-9f44-186a19c25555"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.887895 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a52996eb-0df4-4f58-9f44-186a19c25555-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:29 crc kubenswrapper[4892]: I0122 09:30:29.887941 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a52996eb-0df4-4f58-9f44-186a19c25555-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.333602 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.386926 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a52996eb-0df4-4f58-9f44-186a19c25555","Type":"ContainerDied","Data":"0829d8f98842f78aa5990ec18866673c8dbe38f33a2cc4fcf4be4ff5e6c87743"} Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.386980 4892 scope.go:117] "RemoveContainer" containerID="6c3f030b07aaf1ba33c787866c6674bb9383137538173f28af52c6888e5ef677" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.387005 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.389156 4892 generic.go:334] "Generic (PLEG): container finished" podID="a11d9dd9-7bad-489e-89c0-76aba2c3a239" containerID="f58dbf1064e9566689856058bcc0614dfcbdb10d27e0ac4e4589051009fe0703" exitCode=0 Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.389198 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a11d9dd9-7bad-489e-89c0-76aba2c3a239","Type":"ContainerDied","Data":"f58dbf1064e9566689856058bcc0614dfcbdb10d27e0ac4e4589051009fe0703"} Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.389250 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a11d9dd9-7bad-489e-89c0-76aba2c3a239","Type":"ContainerDied","Data":"7adc697c985e9148339a7baf516d5b7eaccc5041f5634326253f74b7e9935e1c"} Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.389255 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.389218 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.401247 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a11d9dd9-7bad-489e-89c0-76aba2c3a239-combined-ca-bundle\") pod \"a11d9dd9-7bad-489e-89c0-76aba2c3a239\" (UID: \"a11d9dd9-7bad-489e-89c0-76aba2c3a239\") " Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.401481 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrbbr\" (UniqueName: \"kubernetes.io/projected/a11d9dd9-7bad-489e-89c0-76aba2c3a239-kube-api-access-rrbbr\") pod \"a11d9dd9-7bad-489e-89c0-76aba2c3a239\" (UID: \"a11d9dd9-7bad-489e-89c0-76aba2c3a239\") " Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.401637 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a11d9dd9-7bad-489e-89c0-76aba2c3a239-logs\") pod \"a11d9dd9-7bad-489e-89c0-76aba2c3a239\" (UID: \"a11d9dd9-7bad-489e-89c0-76aba2c3a239\") " Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.401677 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a11d9dd9-7bad-489e-89c0-76aba2c3a239-config-data\") pod \"a11d9dd9-7bad-489e-89c0-76aba2c3a239\" (UID: \"a11d9dd9-7bad-489e-89c0-76aba2c3a239\") " Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.402117 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a11d9dd9-7bad-489e-89c0-76aba2c3a239-logs" (OuterVolumeSpecName: "logs") pod "a11d9dd9-7bad-489e-89c0-76aba2c3a239" (UID: "a11d9dd9-7bad-489e-89c0-76aba2c3a239"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.402216 4892 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a11d9dd9-7bad-489e-89c0-76aba2c3a239-logs\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.404750 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a11d9dd9-7bad-489e-89c0-76aba2c3a239-kube-api-access-rrbbr" (OuterVolumeSpecName: "kube-api-access-rrbbr") pod "a11d9dd9-7bad-489e-89c0-76aba2c3a239" (UID: "a11d9dd9-7bad-489e-89c0-76aba2c3a239"). InnerVolumeSpecName "kube-api-access-rrbbr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.406702 4892 scope.go:117] "RemoveContainer" containerID="61b5fb18f1d0ea77e2a6f67803bf1ae56e54123ce84b8a3497473c386a59e3b6" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.427890 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.431102 4892 scope.go:117] "RemoveContainer" containerID="9b20bcd741a6581cfe6e10b52023336281a8eaaca91c9cdfe5c436015e86d937" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.453724 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.454032 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a11d9dd9-7bad-489e-89c0-76aba2c3a239-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a11d9dd9-7bad-489e-89c0-76aba2c3a239" (UID: "a11d9dd9-7bad-489e-89c0-76aba2c3a239"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.466674 4892 scope.go:117] "RemoveContainer" containerID="ab01d620c9250b713d766214a10e344eec5cee81aff018cfc5569bc384782257" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.468259 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a11d9dd9-7bad-489e-89c0-76aba2c3a239-config-data" (OuterVolumeSpecName: "config-data") pod "a11d9dd9-7bad-489e-89c0-76aba2c3a239" (UID: "a11d9dd9-7bad-489e-89c0-76aba2c3a239"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.478356 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.486966 4892 scope.go:117] "RemoveContainer" containerID="f58dbf1064e9566689856058bcc0614dfcbdb10d27e0ac4e4589051009fe0703" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.491773 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.499194 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:30:30 crc kubenswrapper[4892]: E0122 09:30:30.499578 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a52996eb-0df4-4f58-9f44-186a19c25555" containerName="proxy-httpd" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.499594 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="a52996eb-0df4-4f58-9f44-186a19c25555" containerName="proxy-httpd" Jan 22 09:30:30 crc kubenswrapper[4892]: E0122 09:30:30.499606 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a52996eb-0df4-4f58-9f44-186a19c25555" containerName="ceilometer-notification-agent" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.499612 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="a52996eb-0df4-4f58-9f44-186a19c25555" containerName="ceilometer-notification-agent" Jan 22 09:30:30 crc kubenswrapper[4892]: E0122 09:30:30.499623 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a52996eb-0df4-4f58-9f44-186a19c25555" containerName="ceilometer-central-agent" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.499630 4892 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a52996eb-0df4-4f58-9f44-186a19c25555" containerName="ceilometer-central-agent" Jan 22 09:30:30 crc kubenswrapper[4892]: E0122 09:30:30.499638 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b2bc62f-ae81-4ab9-81e3-310506ee057a" containerName="nova-scheduler-scheduler" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.499644 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b2bc62f-ae81-4ab9-81e3-310506ee057a" containerName="nova-scheduler-scheduler" Jan 22 09:30:30 crc kubenswrapper[4892]: E0122 09:30:30.499654 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a52996eb-0df4-4f58-9f44-186a19c25555" containerName="sg-core" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.499660 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="a52996eb-0df4-4f58-9f44-186a19c25555" containerName="sg-core" Jan 22 09:30:30 crc kubenswrapper[4892]: E0122 09:30:30.499675 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a11d9dd9-7bad-489e-89c0-76aba2c3a239" containerName="nova-api-log" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.499683 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="a11d9dd9-7bad-489e-89c0-76aba2c3a239" containerName="nova-api-log" Jan 22 09:30:30 crc kubenswrapper[4892]: E0122 09:30:30.499691 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a11d9dd9-7bad-489e-89c0-76aba2c3a239" containerName="nova-api-api" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.499697 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="a11d9dd9-7bad-489e-89c0-76aba2c3a239" containerName="nova-api-api" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.499855 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b2bc62f-ae81-4ab9-81e3-310506ee057a" containerName="nova-scheduler-scheduler" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.499866 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="a52996eb-0df4-4f58-9f44-186a19c25555" containerName="sg-core" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.499878 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="a52996eb-0df4-4f58-9f44-186a19c25555" containerName="ceilometer-notification-agent" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.499887 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="a11d9dd9-7bad-489e-89c0-76aba2c3a239" containerName="nova-api-log" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.499896 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="a52996eb-0df4-4f58-9f44-186a19c25555" containerName="proxy-httpd" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.499907 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="a11d9dd9-7bad-489e-89c0-76aba2c3a239" containerName="nova-api-api" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.499917 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="a52996eb-0df4-4f58-9f44-186a19c25555" containerName="ceilometer-central-agent" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.501705 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.503642 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.503829 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.504456 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.505520 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a11d9dd9-7bad-489e-89c0-76aba2c3a239-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.505545 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrbbr\" (UniqueName: \"kubernetes.io/projected/a11d9dd9-7bad-489e-89c0-76aba2c3a239-kube-api-access-rrbbr\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.505555 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a11d9dd9-7bad-489e-89c0-76aba2c3a239-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.510872 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.512031 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.513753 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.514163 4892 scope.go:117] "RemoveContainer" containerID="52e4f8b7eddb779cb01ca8681514635d1175f9a1fdf70fe525306332df6742ba" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.524242 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.537380 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.539325 4892 scope.go:117] "RemoveContainer" containerID="f58dbf1064e9566689856058bcc0614dfcbdb10d27e0ac4e4589051009fe0703" Jan 22 09:30:30 crc kubenswrapper[4892]: E0122 09:30:30.539889 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f58dbf1064e9566689856058bcc0614dfcbdb10d27e0ac4e4589051009fe0703\": container with ID starting with f58dbf1064e9566689856058bcc0614dfcbdb10d27e0ac4e4589051009fe0703 not found: ID does not exist" containerID="f58dbf1064e9566689856058bcc0614dfcbdb10d27e0ac4e4589051009fe0703" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.539924 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f58dbf1064e9566689856058bcc0614dfcbdb10d27e0ac4e4589051009fe0703"} err="failed to get container status \"f58dbf1064e9566689856058bcc0614dfcbdb10d27e0ac4e4589051009fe0703\": rpc error: code = NotFound desc = could not find container \"f58dbf1064e9566689856058bcc0614dfcbdb10d27e0ac4e4589051009fe0703\": container with ID starting with f58dbf1064e9566689856058bcc0614dfcbdb10d27e0ac4e4589051009fe0703 not 
found: ID does not exist" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.539953 4892 scope.go:117] "RemoveContainer" containerID="52e4f8b7eddb779cb01ca8681514635d1175f9a1fdf70fe525306332df6742ba" Jan 22 09:30:30 crc kubenswrapper[4892]: E0122 09:30:30.540474 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52e4f8b7eddb779cb01ca8681514635d1175f9a1fdf70fe525306332df6742ba\": container with ID starting with 52e4f8b7eddb779cb01ca8681514635d1175f9a1fdf70fe525306332df6742ba not found: ID does not exist" containerID="52e4f8b7eddb779cb01ca8681514635d1175f9a1fdf70fe525306332df6742ba" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.540513 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52e4f8b7eddb779cb01ca8681514635d1175f9a1fdf70fe525306332df6742ba"} err="failed to get container status \"52e4f8b7eddb779cb01ca8681514635d1175f9a1fdf70fe525306332df6742ba\": rpc error: code = NotFound desc = could not find container \"52e4f8b7eddb779cb01ca8681514635d1175f9a1fdf70fe525306332df6742ba\": container with ID starting with 52e4f8b7eddb779cb01ca8681514635d1175f9a1fdf70fe525306332df6742ba not found: ID does not exist" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.606932 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f98018bd-15b2-4676-8292-70e1f59e6a95-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f98018bd-15b2-4676-8292-70e1f59e6a95\") " pod="openstack/ceilometer-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.607108 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f98018bd-15b2-4676-8292-70e1f59e6a95-run-httpd\") pod \"ceilometer-0\" (UID: \"f98018bd-15b2-4676-8292-70e1f59e6a95\") " pod="openstack/ceilometer-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.607149 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztc2c\" (UniqueName: \"kubernetes.io/projected/d8d0e640-322a-4479-8603-64deae4d364a-kube-api-access-ztc2c\") pod \"nova-scheduler-0\" (UID: \"d8d0e640-322a-4479-8603-64deae4d364a\") " pod="openstack/nova-scheduler-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.607216 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98018bd-15b2-4676-8292-70e1f59e6a95-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f98018bd-15b2-4676-8292-70e1f59e6a95\") " pod="openstack/ceilometer-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.607425 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98018bd-15b2-4676-8292-70e1f59e6a95-config-data\") pod \"ceilometer-0\" (UID: \"f98018bd-15b2-4676-8292-70e1f59e6a95\") " pod="openstack/ceilometer-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.607484 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s87r\" (UniqueName: \"kubernetes.io/projected/f98018bd-15b2-4676-8292-70e1f59e6a95-kube-api-access-6s87r\") pod \"ceilometer-0\" (UID: \"f98018bd-15b2-4676-8292-70e1f59e6a95\") " pod="openstack/ceilometer-0" Jan 
22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.607628 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f98018bd-15b2-4676-8292-70e1f59e6a95-log-httpd\") pod \"ceilometer-0\" (UID: \"f98018bd-15b2-4676-8292-70e1f59e6a95\") " pod="openstack/ceilometer-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.607716 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f98018bd-15b2-4676-8292-70e1f59e6a95-scripts\") pod \"ceilometer-0\" (UID: \"f98018bd-15b2-4676-8292-70e1f59e6a95\") " pod="openstack/ceilometer-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.607775 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f98018bd-15b2-4676-8292-70e1f59e6a95-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f98018bd-15b2-4676-8292-70e1f59e6a95\") " pod="openstack/ceilometer-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.607867 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d0e640-322a-4479-8603-64deae4d364a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d8d0e640-322a-4479-8603-64deae4d364a\") " pod="openstack/nova-scheduler-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.607968 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d0e640-322a-4479-8603-64deae4d364a-config-data\") pod \"nova-scheduler-0\" (UID: \"d8d0e640-322a-4479-8603-64deae4d364a\") " pod="openstack/nova-scheduler-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.709748 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztc2c\" (UniqueName: \"kubernetes.io/projected/d8d0e640-322a-4479-8603-64deae4d364a-kube-api-access-ztc2c\") pod \"nova-scheduler-0\" (UID: \"d8d0e640-322a-4479-8603-64deae4d364a\") " pod="openstack/nova-scheduler-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.709842 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98018bd-15b2-4676-8292-70e1f59e6a95-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f98018bd-15b2-4676-8292-70e1f59e6a95\") " pod="openstack/ceilometer-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.709904 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98018bd-15b2-4676-8292-70e1f59e6a95-config-data\") pod \"ceilometer-0\" (UID: \"f98018bd-15b2-4676-8292-70e1f59e6a95\") " pod="openstack/ceilometer-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.709942 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s87r\" (UniqueName: \"kubernetes.io/projected/f98018bd-15b2-4676-8292-70e1f59e6a95-kube-api-access-6s87r\") pod \"ceilometer-0\" (UID: \"f98018bd-15b2-4676-8292-70e1f59e6a95\") " pod="openstack/ceilometer-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.710033 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f98018bd-15b2-4676-8292-70e1f59e6a95-log-httpd\") pod \"ceilometer-0\" (UID: \"f98018bd-15b2-4676-8292-70e1f59e6a95\") " pod="openstack/ceilometer-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.710096 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f98018bd-15b2-4676-8292-70e1f59e6a95-scripts\") pod \"ceilometer-0\" (UID: \"f98018bd-15b2-4676-8292-70e1f59e6a95\") " pod="openstack/ceilometer-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.710165 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f98018bd-15b2-4676-8292-70e1f59e6a95-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f98018bd-15b2-4676-8292-70e1f59e6a95\") " pod="openstack/ceilometer-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.710216 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d0e640-322a-4479-8603-64deae4d364a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d8d0e640-322a-4479-8603-64deae4d364a\") " pod="openstack/nova-scheduler-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.710265 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d0e640-322a-4479-8603-64deae4d364a-config-data\") pod \"nova-scheduler-0\" (UID: \"d8d0e640-322a-4479-8603-64deae4d364a\") " pod="openstack/nova-scheduler-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.710374 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f98018bd-15b2-4676-8292-70e1f59e6a95-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f98018bd-15b2-4676-8292-70e1f59e6a95\") " pod="openstack/ceilometer-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.710438 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f98018bd-15b2-4676-8292-70e1f59e6a95-run-httpd\") pod \"ceilometer-0\" (UID: \"f98018bd-15b2-4676-8292-70e1f59e6a95\") " pod="openstack/ceilometer-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.711206 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f98018bd-15b2-4676-8292-70e1f59e6a95-run-httpd\") pod \"ceilometer-0\" (UID: \"f98018bd-15b2-4676-8292-70e1f59e6a95\") " pod="openstack/ceilometer-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.711212 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f98018bd-15b2-4676-8292-70e1f59e6a95-log-httpd\") pod \"ceilometer-0\" (UID: \"f98018bd-15b2-4676-8292-70e1f59e6a95\") " pod="openstack/ceilometer-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.713528 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f98018bd-15b2-4676-8292-70e1f59e6a95-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f98018bd-15b2-4676-8292-70e1f59e6a95\") " pod="openstack/ceilometer-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.714277 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f98018bd-15b2-4676-8292-70e1f59e6a95-scripts\") pod \"ceilometer-0\" (UID: \"f98018bd-15b2-4676-8292-70e1f59e6a95\") " pod="openstack/ceilometer-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.714334 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98018bd-15b2-4676-8292-70e1f59e6a95-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f98018bd-15b2-4676-8292-70e1f59e6a95\") " pod="openstack/ceilometer-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.714655 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98018bd-15b2-4676-8292-70e1f59e6a95-config-data\") pod \"ceilometer-0\" (UID: \"f98018bd-15b2-4676-8292-70e1f59e6a95\") " pod="openstack/ceilometer-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.715803 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d0e640-322a-4479-8603-64deae4d364a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d8d0e640-322a-4479-8603-64deae4d364a\") " pod="openstack/nova-scheduler-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.715986 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f98018bd-15b2-4676-8292-70e1f59e6a95-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f98018bd-15b2-4676-8292-70e1f59e6a95\") " pod="openstack/ceilometer-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.726722 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d0e640-322a-4479-8603-64deae4d364a-config-data\") pod \"nova-scheduler-0\" (UID: \"d8d0e640-322a-4479-8603-64deae4d364a\") " pod="openstack/nova-scheduler-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.731465 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztc2c\" (UniqueName: \"kubernetes.io/projected/d8d0e640-322a-4479-8603-64deae4d364a-kube-api-access-ztc2c\") pod \"nova-scheduler-0\" (UID: \"d8d0e640-322a-4479-8603-64deae4d364a\") " pod="openstack/nova-scheduler-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.749906 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s87r\" (UniqueName: \"kubernetes.io/projected/f98018bd-15b2-4676-8292-70e1f59e6a95-kube-api-access-6s87r\") pod \"ceilometer-0\" (UID: \"f98018bd-15b2-4676-8292-70e1f59e6a95\") " pod="openstack/ceilometer-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.749985 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.760678 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.788939 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.790810 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.792863 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.802724 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.812207 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvc7t\" (UniqueName: \"kubernetes.io/projected/e4bfa160-bff6-4872-a900-2c112fe4c587-kube-api-access-vvc7t\") pod \"nova-api-0\" (UID: \"e4bfa160-bff6-4872-a900-2c112fe4c587\") " pod="openstack/nova-api-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.812313 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4bfa160-bff6-4872-a900-2c112fe4c587-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e4bfa160-bff6-4872-a900-2c112fe4c587\") " pod="openstack/nova-api-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.812379 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4bfa160-bff6-4872-a900-2c112fe4c587-config-data\") pod \"nova-api-0\" (UID: \"e4bfa160-bff6-4872-a900-2c112fe4c587\") " pod="openstack/nova-api-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.812491 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4bfa160-bff6-4872-a900-2c112fe4c587-logs\") pod \"nova-api-0\" (UID: \"e4bfa160-bff6-4872-a900-2c112fe4c587\") " pod="openstack/nova-api-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.833546 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.846227 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.913874 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4bfa160-bff6-4872-a900-2c112fe4c587-config-data\") pod \"nova-api-0\" (UID: \"e4bfa160-bff6-4872-a900-2c112fe4c587\") " pod="openstack/nova-api-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.913957 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4bfa160-bff6-4872-a900-2c112fe4c587-logs\") pod \"nova-api-0\" (UID: \"e4bfa160-bff6-4872-a900-2c112fe4c587\") " pod="openstack/nova-api-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.914021 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvc7t\" (UniqueName: \"kubernetes.io/projected/e4bfa160-bff6-4872-a900-2c112fe4c587-kube-api-access-vvc7t\") pod \"nova-api-0\" (UID: \"e4bfa160-bff6-4872-a900-2c112fe4c587\") " pod="openstack/nova-api-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.914066 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4bfa160-bff6-4872-a900-2c112fe4c587-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e4bfa160-bff6-4872-a900-2c112fe4c587\") " pod="openstack/nova-api-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.914968 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4bfa160-bff6-4872-a900-2c112fe4c587-logs\") pod \"nova-api-0\" (UID: \"e4bfa160-bff6-4872-a900-2c112fe4c587\") " pod="openstack/nova-api-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.922942 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4bfa160-bff6-4872-a900-2c112fe4c587-config-data\") pod \"nova-api-0\" (UID: \"e4bfa160-bff6-4872-a900-2c112fe4c587\") " pod="openstack/nova-api-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.927020 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4bfa160-bff6-4872-a900-2c112fe4c587-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e4bfa160-bff6-4872-a900-2c112fe4c587\") " pod="openstack/nova-api-0" Jan 22 09:30:30 crc kubenswrapper[4892]: I0122 09:30:30.939875 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvc7t\" (UniqueName: \"kubernetes.io/projected/e4bfa160-bff6-4872-a900-2c112fe4c587-kube-api-access-vvc7t\") pod \"nova-api-0\" (UID: \"e4bfa160-bff6-4872-a900-2c112fe4c587\") " pod="openstack/nova-api-0" Jan 22 09:30:31 crc kubenswrapper[4892]: I0122 09:30:31.008261 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 09:30:31 crc kubenswrapper[4892]: W0122 09:30:31.308857 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8d0e640_322a_4479_8603_64deae4d364a.slice/crio-4a3b370ae5485749f3b3e15bd0a78127cad586ed31135dcad5baafdc926797f6 WatchSource:0}: Error finding container 4a3b370ae5485749f3b3e15bd0a78127cad586ed31135dcad5baafdc926797f6: Status 404 returned error can't find the container with id 4a3b370ae5485749f3b3e15bd0a78127cad586ed31135dcad5baafdc926797f6 Jan 22 09:30:31 crc kubenswrapper[4892]: I0122 09:30:31.309487 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 09:30:31 crc kubenswrapper[4892]: W0122 09:30:31.318192 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf98018bd_15b2_4676_8292_70e1f59e6a95.slice/crio-fa7f64883278c33c1d1aba1e77805ad3bfc698fc90b60ee4f17cc8350680c4c3 WatchSource:0}: Error finding container fa7f64883278c33c1d1aba1e77805ad3bfc698fc90b60ee4f17cc8350680c4c3: Status 404 returned error can't find the container with id fa7f64883278c33c1d1aba1e77805ad3bfc698fc90b60ee4f17cc8350680c4c3 Jan 22 09:30:31 crc kubenswrapper[4892]: I0122 09:30:31.320504 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:30:31 crc kubenswrapper[4892]: I0122 09:30:31.399098 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d8d0e640-322a-4479-8603-64deae4d364a","Type":"ContainerStarted","Data":"4a3b370ae5485749f3b3e15bd0a78127cad586ed31135dcad5baafdc926797f6"} Jan 22 09:30:31 crc kubenswrapper[4892]: I0122 09:30:31.401385 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f98018bd-15b2-4676-8292-70e1f59e6a95","Type":"ContainerStarted","Data":"fa7f64883278c33c1d1aba1e77805ad3bfc698fc90b60ee4f17cc8350680c4c3"} Jan 22 09:30:31 crc kubenswrapper[4892]: I0122 09:30:31.470195 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b2bc62f-ae81-4ab9-81e3-310506ee057a" path="/var/lib/kubelet/pods/5b2bc62f-ae81-4ab9-81e3-310506ee057a/volumes" Jan 22 09:30:31 crc kubenswrapper[4892]: I0122 09:30:31.483305 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a11d9dd9-7bad-489e-89c0-76aba2c3a239" path="/var/lib/kubelet/pods/a11d9dd9-7bad-489e-89c0-76aba2c3a239/volumes" Jan 22 09:30:31 crc kubenswrapper[4892]: I0122 09:30:31.484550 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52996eb-0df4-4f58-9f44-186a19c25555" path="/var/lib/kubelet/pods/a52996eb-0df4-4f58-9f44-186a19c25555/volumes" Jan 22 09:30:31 crc kubenswrapper[4892]: I0122 09:30:31.485538 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 09:30:32 crc kubenswrapper[4892]: I0122 09:30:32.416274 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e4bfa160-bff6-4872-a900-2c112fe4c587","Type":"ContainerStarted","Data":"baf74012e96de11ff1f814df033e29ca9c6a45b6a7bf9c8f62f2a4b3fa76bc56"} Jan 22 09:30:32 crc kubenswrapper[4892]: I0122 09:30:32.416711 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e4bfa160-bff6-4872-a900-2c112fe4c587","Type":"ContainerStarted","Data":"bd1026a45dad2806fb10fb1cb674b50b6870a8a0a21b8a57ccbd009935a7dd67"} Jan 22 09:30:32 crc 
kubenswrapper[4892]: I0122 09:30:32.416724 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e4bfa160-bff6-4872-a900-2c112fe4c587","Type":"ContainerStarted","Data":"db130de3d75faa7861d0a938f7586ed421ebb73eb3c41f28eec3f417327e3935"} Jan 22 09:30:32 crc kubenswrapper[4892]: I0122 09:30:32.421274 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f98018bd-15b2-4676-8292-70e1f59e6a95","Type":"ContainerStarted","Data":"dac45c374f8cb8181ccaf5a603249669a4c8e5812a0f005c377ae9b7538e95b6"} Jan 22 09:30:32 crc kubenswrapper[4892]: I0122 09:30:32.426718 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d8d0e640-322a-4479-8603-64deae4d364a","Type":"ContainerStarted","Data":"6a984c5261618ed96f27e8248ce2912dd232c3326e88d821824edae0d6a7cd82"} Jan 22 09:30:32 crc kubenswrapper[4892]: I0122 09:30:32.434869 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.434848182 podStartE2EDuration="2.434848182s" podCreationTimestamp="2026-01-22 09:30:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:30:32.433758436 +0000 UTC m=+1202.277837499" watchObservedRunningTime="2026-01-22 09:30:32.434848182 +0000 UTC m=+1202.278927245" Jan 22 09:30:32 crc kubenswrapper[4892]: I0122 09:30:32.457576 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.457559253 podStartE2EDuration="2.457559253s" podCreationTimestamp="2026-01-22 09:30:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:30:32.449819196 +0000 UTC m=+1202.293898279" watchObservedRunningTime="2026-01-22 09:30:32.457559253 +0000 UTC m=+1202.301638316" Jan 22 09:30:33 crc kubenswrapper[4892]: I0122 09:30:33.436408 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f98018bd-15b2-4676-8292-70e1f59e6a95","Type":"ContainerStarted","Data":"1bda43a6db1543f0b4d10371e0ff75775d6df075e836aa88200d595d74b696ca"} Jan 22 09:30:34 crc kubenswrapper[4892]: I0122 09:30:34.449870 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f98018bd-15b2-4676-8292-70e1f59e6a95","Type":"ContainerStarted","Data":"72517a55218c3a1b2a3d07e44b65b0bc918c3574175b7a737ca8ef62fe780e3f"} Jan 22 09:30:34 crc kubenswrapper[4892]: I0122 09:30:34.765597 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 22 09:30:35 crc kubenswrapper[4892]: I0122 09:30:35.463707 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f98018bd-15b2-4676-8292-70e1f59e6a95","Type":"ContainerStarted","Data":"2e133bf26da08058983692c47122d81b9912dfb4dfc3847940024d682a445bda"} Jan 22 09:30:35 crc kubenswrapper[4892]: I0122 09:30:35.463892 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 09:30:35 crc kubenswrapper[4892]: I0122 09:30:35.489819 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.127974413 podStartE2EDuration="5.489803134s" podCreationTimestamp="2026-01-22 09:30:30 +0000 UTC" firstStartedPulling="2026-01-22 
09:30:31.321573531 +0000 UTC m=+1201.165652594" lastFinishedPulling="2026-01-22 09:30:34.683402262 +0000 UTC m=+1204.527481315" observedRunningTime="2026-01-22 09:30:35.485789297 +0000 UTC m=+1205.329868360" watchObservedRunningTime="2026-01-22 09:30:35.489803134 +0000 UTC m=+1205.333882197" Jan 22 09:30:35 crc kubenswrapper[4892]: I0122 09:30:35.846608 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 22 09:30:36 crc kubenswrapper[4892]: I0122 09:30:36.739366 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 22 09:30:40 crc kubenswrapper[4892]: I0122 09:30:40.847247 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 22 09:30:40 crc kubenswrapper[4892]: I0122 09:30:40.895677 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 22 09:30:41 crc kubenswrapper[4892]: I0122 09:30:41.009591 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 09:30:41 crc kubenswrapper[4892]: I0122 09:30:41.009684 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 09:30:41 crc kubenswrapper[4892]: I0122 09:30:41.556171 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 22 09:30:42 crc kubenswrapper[4892]: I0122 09:30:42.092525 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e4bfa160-bff6-4872-a900-2c112fe4c587" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 09:30:42 crc kubenswrapper[4892]: I0122 09:30:42.092878 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e4bfa160-bff6-4872-a900-2c112fe4c587" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 09:30:46 crc kubenswrapper[4892]: I0122 09:30:46.323735 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:30:46 crc kubenswrapper[4892]: I0122 09:30:46.324537 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:30:47 crc kubenswrapper[4892]: E0122 09:30:47.514870 4892 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bcbf858_6b11_4f81_9782_00df8dad36cf.slice/crio-888c1f0cc605d7877bf0ee74ced8880290b210d0606653ba8ceb31a0e38e3bf6.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bcbf858_6b11_4f81_9782_00df8dad36cf.slice/crio-conmon-888c1f0cc605d7877bf0ee74ced8880290b210d0606653ba8ceb31a0e38e3bf6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea12e63_6eef_4df6_a9ad_261f657546c3.slice/crio-10a66b212df198e0f5926a20f8fd18efd6d09099bad778a4d4347c197076cf10.scope\": RecentStats: unable to find data in memory cache]" Jan 22 09:30:47 crc kubenswrapper[4892]: I0122 09:30:47.601405 4892 generic.go:334] "Generic (PLEG): container finished" podID="7ea12e63-6eef-4df6-a9ad-261f657546c3" containerID="10a66b212df198e0f5926a20f8fd18efd6d09099bad778a4d4347c197076cf10" exitCode=137 Jan 22 09:30:47 crc kubenswrapper[4892]: I0122 09:30:47.601480 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7ea12e63-6eef-4df6-a9ad-261f657546c3","Type":"ContainerDied","Data":"10a66b212df198e0f5926a20f8fd18efd6d09099bad778a4d4347c197076cf10"} Jan 22 09:30:47 crc kubenswrapper[4892]: I0122 09:30:47.601512 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7ea12e63-6eef-4df6-a9ad-261f657546c3","Type":"ContainerDied","Data":"36604fec81c7268b4811df920f654cd9f67a0340e2ea519e039a00ab8db1a962"} Jan 22 09:30:47 crc kubenswrapper[4892]: I0122 09:30:47.601526 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36604fec81c7268b4811df920f654cd9f67a0340e2ea519e039a00ab8db1a962" Jan 22 09:30:47 crc kubenswrapper[4892]: I0122 09:30:47.603460 4892 generic.go:334] "Generic (PLEG): container finished" podID="1bcbf858-6b11-4f81-9782-00df8dad36cf" containerID="888c1f0cc605d7877bf0ee74ced8880290b210d0606653ba8ceb31a0e38e3bf6" exitCode=137 Jan 22 09:30:47 crc kubenswrapper[4892]: I0122 09:30:47.603498 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1bcbf858-6b11-4f81-9782-00df8dad36cf","Type":"ContainerDied","Data":"888c1f0cc605d7877bf0ee74ced8880290b210d0606653ba8ceb31a0e38e3bf6"} Jan 22 09:30:47 crc kubenswrapper[4892]: I0122 09:30:47.673306 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:30:47 crc kubenswrapper[4892]: I0122 09:30:47.679845 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 09:30:47 crc kubenswrapper[4892]: I0122 09:30:47.795530 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ea12e63-6eef-4df6-a9ad-261f657546c3-combined-ca-bundle\") pod \"7ea12e63-6eef-4df6-a9ad-261f657546c3\" (UID: \"7ea12e63-6eef-4df6-a9ad-261f657546c3\") " Jan 22 09:30:47 crc kubenswrapper[4892]: I0122 09:30:47.795611 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bcbf858-6b11-4f81-9782-00df8dad36cf-config-data\") pod \"1bcbf858-6b11-4f81-9782-00df8dad36cf\" (UID: \"1bcbf858-6b11-4f81-9782-00df8dad36cf\") " Jan 22 09:30:47 crc kubenswrapper[4892]: I0122 09:30:47.795647 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bcbf858-6b11-4f81-9782-00df8dad36cf-logs\") pod \"1bcbf858-6b11-4f81-9782-00df8dad36cf\" (UID: \"1bcbf858-6b11-4f81-9782-00df8dad36cf\") " Jan 22 09:30:47 crc kubenswrapper[4892]: I0122 09:30:47.795782 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ea12e63-6eef-4df6-a9ad-261f657546c3-config-data\") pod \"7ea12e63-6eef-4df6-a9ad-261f657546c3\" (UID: \"7ea12e63-6eef-4df6-a9ad-261f657546c3\") " Jan 22 09:30:47 crc kubenswrapper[4892]: I0122 09:30:47.795875 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bcbf858-6b11-4f81-9782-00df8dad36cf-combined-ca-bundle\") pod \"1bcbf858-6b11-4f81-9782-00df8dad36cf\" (UID: \"1bcbf858-6b11-4f81-9782-00df8dad36cf\") " Jan 22 09:30:47 crc kubenswrapper[4892]: I0122 09:30:47.795925 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7gdc\" (UniqueName: \"kubernetes.io/projected/7ea12e63-6eef-4df6-a9ad-261f657546c3-kube-api-access-b7gdc\") pod \"7ea12e63-6eef-4df6-a9ad-261f657546c3\" (UID: \"7ea12e63-6eef-4df6-a9ad-261f657546c3\") " Jan 22 09:30:47 crc kubenswrapper[4892]: I0122 09:30:47.795983 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8h8r\" (UniqueName: \"kubernetes.io/projected/1bcbf858-6b11-4f81-9782-00df8dad36cf-kube-api-access-t8h8r\") pod \"1bcbf858-6b11-4f81-9782-00df8dad36cf\" (UID: \"1bcbf858-6b11-4f81-9782-00df8dad36cf\") " Jan 22 09:30:47 crc kubenswrapper[4892]: I0122 09:30:47.795984 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bcbf858-6b11-4f81-9782-00df8dad36cf-logs" (OuterVolumeSpecName: "logs") pod "1bcbf858-6b11-4f81-9782-00df8dad36cf" (UID: "1bcbf858-6b11-4f81-9782-00df8dad36cf"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:30:47 crc kubenswrapper[4892]: I0122 09:30:47.796663 4892 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bcbf858-6b11-4f81-9782-00df8dad36cf-logs\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:47 crc kubenswrapper[4892]: I0122 09:30:47.801067 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ea12e63-6eef-4df6-a9ad-261f657546c3-kube-api-access-b7gdc" (OuterVolumeSpecName: "kube-api-access-b7gdc") pod "7ea12e63-6eef-4df6-a9ad-261f657546c3" (UID: "7ea12e63-6eef-4df6-a9ad-261f657546c3"). InnerVolumeSpecName "kube-api-access-b7gdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:30:47 crc kubenswrapper[4892]: I0122 09:30:47.801326 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bcbf858-6b11-4f81-9782-00df8dad36cf-kube-api-access-t8h8r" (OuterVolumeSpecName: "kube-api-access-t8h8r") pod "1bcbf858-6b11-4f81-9782-00df8dad36cf" (UID: "1bcbf858-6b11-4f81-9782-00df8dad36cf"). InnerVolumeSpecName "kube-api-access-t8h8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:30:47 crc kubenswrapper[4892]: I0122 09:30:47.820753 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bcbf858-6b11-4f81-9782-00df8dad36cf-config-data" (OuterVolumeSpecName: "config-data") pod "1bcbf858-6b11-4f81-9782-00df8dad36cf" (UID: "1bcbf858-6b11-4f81-9782-00df8dad36cf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:30:47 crc kubenswrapper[4892]: I0122 09:30:47.823051 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ea12e63-6eef-4df6-a9ad-261f657546c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ea12e63-6eef-4df6-a9ad-261f657546c3" (UID: "7ea12e63-6eef-4df6-a9ad-261f657546c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:30:47 crc kubenswrapper[4892]: I0122 09:30:47.827120 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ea12e63-6eef-4df6-a9ad-261f657546c3-config-data" (OuterVolumeSpecName: "config-data") pod "7ea12e63-6eef-4df6-a9ad-261f657546c3" (UID: "7ea12e63-6eef-4df6-a9ad-261f657546c3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:30:47 crc kubenswrapper[4892]: I0122 09:30:47.831081 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bcbf858-6b11-4f81-9782-00df8dad36cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1bcbf858-6b11-4f81-9782-00df8dad36cf" (UID: "1bcbf858-6b11-4f81-9782-00df8dad36cf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:30:47 crc kubenswrapper[4892]: I0122 09:30:47.898944 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ea12e63-6eef-4df6-a9ad-261f657546c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:47 crc kubenswrapper[4892]: I0122 09:30:47.899005 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bcbf858-6b11-4f81-9782-00df8dad36cf-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:47 crc kubenswrapper[4892]: I0122 09:30:47.899016 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ea12e63-6eef-4df6-a9ad-261f657546c3-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:47 crc kubenswrapper[4892]: I0122 09:30:47.899026 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bcbf858-6b11-4f81-9782-00df8dad36cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:47 crc kubenswrapper[4892]: I0122 09:30:47.899041 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7gdc\" (UniqueName: \"kubernetes.io/projected/7ea12e63-6eef-4df6-a9ad-261f657546c3-kube-api-access-b7gdc\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:47 crc kubenswrapper[4892]: I0122 09:30:47.899055 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8h8r\" (UniqueName: \"kubernetes.io/projected/1bcbf858-6b11-4f81-9782-00df8dad36cf-kube-api-access-t8h8r\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.622985 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.623041 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.623063 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1bcbf858-6b11-4f81-9782-00df8dad36cf","Type":"ContainerDied","Data":"c97273d47c2166241e06185b7150e9b0697f779ce3762055219d2a3a144fb6ae"} Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.623175 4892 scope.go:117] "RemoveContainer" containerID="888c1f0cc605d7877bf0ee74ced8880290b210d0606653ba8ceb31a0e38e3bf6" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.666803 4892 scope.go:117] "RemoveContainer" containerID="3ad9e55a1b9cca0bc197f2971bc76ad158d8a8e368a502f274c7d4b87462235c" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.670578 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.679885 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.697033 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.710207 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.719563 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 09:30:48 crc kubenswrapper[4892]: E0122 09:30:48.720044 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ea12e63-6eef-4df6-a9ad-261f657546c3" containerName="nova-cell1-novncproxy-novncproxy" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.720066 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ea12e63-6eef-4df6-a9ad-261f657546c3" containerName="nova-cell1-novncproxy-novncproxy" Jan 22 09:30:48 crc kubenswrapper[4892]: E0122 09:30:48.720108 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bcbf858-6b11-4f81-9782-00df8dad36cf" containerName="nova-metadata-log" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.720121 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bcbf858-6b11-4f81-9782-00df8dad36cf" containerName="nova-metadata-log" Jan 22 09:30:48 crc kubenswrapper[4892]: E0122 09:30:48.720157 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bcbf858-6b11-4f81-9782-00df8dad36cf" containerName="nova-metadata-metadata" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.720167 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bcbf858-6b11-4f81-9782-00df8dad36cf" containerName="nova-metadata-metadata" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.720438 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bcbf858-6b11-4f81-9782-00df8dad36cf" containerName="nova-metadata-log" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.720474 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ea12e63-6eef-4df6-a9ad-261f657546c3" containerName="nova-cell1-novncproxy-novncproxy" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.720489 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bcbf858-6b11-4f81-9782-00df8dad36cf" containerName="nova-metadata-metadata" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.721253 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.724492 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.736108 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.736355 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.736360 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.738215 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.742313 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.743527 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.778643 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.793946 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.819515 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtp94\" (UniqueName: \"kubernetes.io/projected/7b5fefad-3b0d-4d26-8fe3-2e117ea96708-kube-api-access-rtp94\") pod \"nova-metadata-0\" (UID: \"7b5fefad-3b0d-4d26-8fe3-2e117ea96708\") " pod="openstack/nova-metadata-0" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.819559 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b5fefad-3b0d-4d26-8fe3-2e117ea96708-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7b5fefad-3b0d-4d26-8fe3-2e117ea96708\") " pod="openstack/nova-metadata-0" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.819588 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b5fefad-3b0d-4d26-8fe3-2e117ea96708-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7b5fefad-3b0d-4d26-8fe3-2e117ea96708\") " pod="openstack/nova-metadata-0" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.819711 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b5fefad-3b0d-4d26-8fe3-2e117ea96708-logs\") pod \"nova-metadata-0\" (UID: \"7b5fefad-3b0d-4d26-8fe3-2e117ea96708\") " pod="openstack/nova-metadata-0" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.819765 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41e0d5e8-eee8-4d06-ae1f-fec66e793078-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"41e0d5e8-eee8-4d06-ae1f-fec66e793078\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:30:48 crc 
kubenswrapper[4892]: I0122 09:30:48.819874 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41e0d5e8-eee8-4d06-ae1f-fec66e793078-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"41e0d5e8-eee8-4d06-ae1f-fec66e793078\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.819955 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/41e0d5e8-eee8-4d06-ae1f-fec66e793078-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"41e0d5e8-eee8-4d06-ae1f-fec66e793078\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.820040 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b5fefad-3b0d-4d26-8fe3-2e117ea96708-config-data\") pod \"nova-metadata-0\" (UID: \"7b5fefad-3b0d-4d26-8fe3-2e117ea96708\") " pod="openstack/nova-metadata-0" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.820068 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxdnx\" (UniqueName: \"kubernetes.io/projected/41e0d5e8-eee8-4d06-ae1f-fec66e793078-kube-api-access-fxdnx\") pod \"nova-cell1-novncproxy-0\" (UID: \"41e0d5e8-eee8-4d06-ae1f-fec66e793078\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.820097 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/41e0d5e8-eee8-4d06-ae1f-fec66e793078-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"41e0d5e8-eee8-4d06-ae1f-fec66e793078\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.921866 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtp94\" (UniqueName: \"kubernetes.io/projected/7b5fefad-3b0d-4d26-8fe3-2e117ea96708-kube-api-access-rtp94\") pod \"nova-metadata-0\" (UID: \"7b5fefad-3b0d-4d26-8fe3-2e117ea96708\") " pod="openstack/nova-metadata-0" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.921920 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b5fefad-3b0d-4d26-8fe3-2e117ea96708-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7b5fefad-3b0d-4d26-8fe3-2e117ea96708\") " pod="openstack/nova-metadata-0" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.921950 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b5fefad-3b0d-4d26-8fe3-2e117ea96708-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7b5fefad-3b0d-4d26-8fe3-2e117ea96708\") " pod="openstack/nova-metadata-0" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.921991 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b5fefad-3b0d-4d26-8fe3-2e117ea96708-logs\") pod \"nova-metadata-0\" (UID: \"7b5fefad-3b0d-4d26-8fe3-2e117ea96708\") " pod="openstack/nova-metadata-0" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.922022 4892 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41e0d5e8-eee8-4d06-ae1f-fec66e793078-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"41e0d5e8-eee8-4d06-ae1f-fec66e793078\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.922075 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41e0d5e8-eee8-4d06-ae1f-fec66e793078-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"41e0d5e8-eee8-4d06-ae1f-fec66e793078\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.922125 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/41e0d5e8-eee8-4d06-ae1f-fec66e793078-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"41e0d5e8-eee8-4d06-ae1f-fec66e793078\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.922189 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b5fefad-3b0d-4d26-8fe3-2e117ea96708-config-data\") pod \"nova-metadata-0\" (UID: \"7b5fefad-3b0d-4d26-8fe3-2e117ea96708\") " pod="openstack/nova-metadata-0" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.922223 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxdnx\" (UniqueName: \"kubernetes.io/projected/41e0d5e8-eee8-4d06-ae1f-fec66e793078-kube-api-access-fxdnx\") pod \"nova-cell1-novncproxy-0\" (UID: \"41e0d5e8-eee8-4d06-ae1f-fec66e793078\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.922255 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/41e0d5e8-eee8-4d06-ae1f-fec66e793078-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"41e0d5e8-eee8-4d06-ae1f-fec66e793078\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.922786 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b5fefad-3b0d-4d26-8fe3-2e117ea96708-logs\") pod \"nova-metadata-0\" (UID: \"7b5fefad-3b0d-4d26-8fe3-2e117ea96708\") " pod="openstack/nova-metadata-0" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.928591 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41e0d5e8-eee8-4d06-ae1f-fec66e793078-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"41e0d5e8-eee8-4d06-ae1f-fec66e793078\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.928605 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/41e0d5e8-eee8-4d06-ae1f-fec66e793078-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"41e0d5e8-eee8-4d06-ae1f-fec66e793078\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.928636 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/41e0d5e8-eee8-4d06-ae1f-fec66e793078-nova-novncproxy-tls-certs\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"41e0d5e8-eee8-4d06-ae1f-fec66e793078\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.929863 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b5fefad-3b0d-4d26-8fe3-2e117ea96708-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7b5fefad-3b0d-4d26-8fe3-2e117ea96708\") " pod="openstack/nova-metadata-0" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.932980 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b5fefad-3b0d-4d26-8fe3-2e117ea96708-config-data\") pod \"nova-metadata-0\" (UID: \"7b5fefad-3b0d-4d26-8fe3-2e117ea96708\") " pod="openstack/nova-metadata-0" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.933468 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41e0d5e8-eee8-4d06-ae1f-fec66e793078-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"41e0d5e8-eee8-4d06-ae1f-fec66e793078\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.939774 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b5fefad-3b0d-4d26-8fe3-2e117ea96708-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7b5fefad-3b0d-4d26-8fe3-2e117ea96708\") " pod="openstack/nova-metadata-0" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.940181 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtp94\" (UniqueName: \"kubernetes.io/projected/7b5fefad-3b0d-4d26-8fe3-2e117ea96708-kube-api-access-rtp94\") pod \"nova-metadata-0\" (UID: \"7b5fefad-3b0d-4d26-8fe3-2e117ea96708\") " pod="openstack/nova-metadata-0" Jan 22 09:30:48 crc kubenswrapper[4892]: I0122 09:30:48.945277 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxdnx\" (UniqueName: \"kubernetes.io/projected/41e0d5e8-eee8-4d06-ae1f-fec66e793078-kube-api-access-fxdnx\") pod \"nova-cell1-novncproxy-0\" (UID: \"41e0d5e8-eee8-4d06-ae1f-fec66e793078\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:30:49 crc kubenswrapper[4892]: I0122 09:30:49.060748 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:30:49 crc kubenswrapper[4892]: I0122 09:30:49.100834 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 09:30:49 crc kubenswrapper[4892]: I0122 09:30:49.442274 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bcbf858-6b11-4f81-9782-00df8dad36cf" path="/var/lib/kubelet/pods/1bcbf858-6b11-4f81-9782-00df8dad36cf/volumes" Jan 22 09:30:49 crc kubenswrapper[4892]: I0122 09:30:49.443175 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ea12e63-6eef-4df6-a9ad-261f657546c3" path="/var/lib/kubelet/pods/7ea12e63-6eef-4df6-a9ad-261f657546c3/volumes" Jan 22 09:30:49 crc kubenswrapper[4892]: I0122 09:30:49.570616 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 09:30:49 crc kubenswrapper[4892]: W0122 09:30:49.572416 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b5fefad_3b0d_4d26_8fe3_2e117ea96708.slice/crio-291d3802b998483f55a9c2b8676cfba8ef526fcb77340977e6d18ff24912bcc9 WatchSource:0}: Error finding container 291d3802b998483f55a9c2b8676cfba8ef526fcb77340977e6d18ff24912bcc9: Status 404 returned error can't find the container with id 291d3802b998483f55a9c2b8676cfba8ef526fcb77340977e6d18ff24912bcc9 Jan 22 09:30:49 crc kubenswrapper[4892]: I0122 09:30:49.631227 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 09:30:49 crc kubenswrapper[4892]: I0122 09:30:49.633191 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7b5fefad-3b0d-4d26-8fe3-2e117ea96708","Type":"ContainerStarted","Data":"291d3802b998483f55a9c2b8676cfba8ef526fcb77340977e6d18ff24912bcc9"} Jan 22 09:30:50 crc kubenswrapper[4892]: I0122 09:30:50.647214 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7b5fefad-3b0d-4d26-8fe3-2e117ea96708","Type":"ContainerStarted","Data":"d48f9cdc05bb7e9f689eafaf4d1e22abf9ea5bd1b80098dcb78db59d189dc297"} Jan 22 09:30:50 crc kubenswrapper[4892]: I0122 09:30:50.647541 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7b5fefad-3b0d-4d26-8fe3-2e117ea96708","Type":"ContainerStarted","Data":"81dda12ab59217dfcd20b7be8c27394200cfdaeacc9fb1d276221943ca510764"} Jan 22 09:30:50 crc kubenswrapper[4892]: I0122 09:30:50.650042 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"41e0d5e8-eee8-4d06-ae1f-fec66e793078","Type":"ContainerStarted","Data":"dcce0fd8c3845e514e782eaf2b8450937893b42570b6593916ba1e88a42c09e7"} Jan 22 09:30:50 crc kubenswrapper[4892]: I0122 09:30:50.650111 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"41e0d5e8-eee8-4d06-ae1f-fec66e793078","Type":"ContainerStarted","Data":"d9ada4fd0a1b788cc6a6b6a43a0a6f620ffebfe9c197e497c740f11edc0f846f"} Jan 22 09:30:50 crc kubenswrapper[4892]: I0122 09:30:50.676726 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.6767049 podStartE2EDuration="2.6767049s" podCreationTimestamp="2026-01-22 09:30:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:30:50.66804962 +0000 UTC m=+1220.512128703" watchObservedRunningTime="2026-01-22 09:30:50.6767049 +0000 UTC m=+1220.520783963" Jan 22 09:30:50 crc kubenswrapper[4892]: I0122 09:30:50.688912 4892 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.688890546 podStartE2EDuration="2.688890546s" podCreationTimestamp="2026-01-22 09:30:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:30:50.68247641 +0000 UTC m=+1220.526555493" watchObservedRunningTime="2026-01-22 09:30:50.688890546 +0000 UTC m=+1220.532969639"
Jan 22 09:30:51 crc kubenswrapper[4892]: I0122 09:30:51.012370 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 22 09:30:51 crc kubenswrapper[4892]: I0122 09:30:51.012789 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 22 09:30:51 crc kubenswrapper[4892]: I0122 09:30:51.013159 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 22 09:30:51 crc kubenswrapper[4892]: I0122 09:30:51.013194 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 22 09:30:51 crc kubenswrapper[4892]: I0122 09:30:51.015579 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 22 09:30:51 crc kubenswrapper[4892]: I0122 09:30:51.016409 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 22 09:30:51 crc kubenswrapper[4892]: I0122 09:30:51.224927 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-rdw7w"]
Jan 22 09:30:51 crc kubenswrapper[4892]: I0122 09:30:51.228808 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-rdw7w"
Jan 22 09:30:51 crc kubenswrapper[4892]: I0122 09:30:51.246101 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-rdw7w"]
Jan 22 09:30:51 crc kubenswrapper[4892]: I0122 09:30:51.380466 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce583610-456f-400a-b840-9d2c65ecf4a6-dns-svc\") pod \"dnsmasq-dns-5ddd577785-rdw7w\" (UID: \"ce583610-456f-400a-b840-9d2c65ecf4a6\") " pod="openstack/dnsmasq-dns-5ddd577785-rdw7w"
Jan 22 09:30:51 crc kubenswrapper[4892]: I0122 09:30:51.380557 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce583610-456f-400a-b840-9d2c65ecf4a6-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-rdw7w\" (UID: \"ce583610-456f-400a-b840-9d2c65ecf4a6\") " pod="openstack/dnsmasq-dns-5ddd577785-rdw7w"
Jan 22 09:30:51 crc kubenswrapper[4892]: I0122 09:30:51.380589 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce583610-456f-400a-b840-9d2c65ecf4a6-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-rdw7w\" (UID: \"ce583610-456f-400a-b840-9d2c65ecf4a6\") " pod="openstack/dnsmasq-dns-5ddd577785-rdw7w"
Jan 22 09:30:51 crc kubenswrapper[4892]: I0122 09:30:51.380645 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mssvm\" (UniqueName: \"kubernetes.io/projected/ce583610-456f-400a-b840-9d2c65ecf4a6-kube-api-access-mssvm\") pod \"dnsmasq-dns-5ddd577785-rdw7w\" (UID: \"ce583610-456f-400a-b840-9d2c65ecf4a6\") " pod="openstack/dnsmasq-dns-5ddd577785-rdw7w"
Jan 22 09:30:51 crc kubenswrapper[4892]: I0122 09:30:51.380768 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce583610-456f-400a-b840-9d2c65ecf4a6-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-rdw7w\" (UID: \"ce583610-456f-400a-b840-9d2c65ecf4a6\") " pod="openstack/dnsmasq-dns-5ddd577785-rdw7w"
Jan 22 09:30:51 crc kubenswrapper[4892]: I0122 09:30:51.380934 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce583610-456f-400a-b840-9d2c65ecf4a6-config\") pod \"dnsmasq-dns-5ddd577785-rdw7w\" (UID: \"ce583610-456f-400a-b840-9d2c65ecf4a6\") " pod="openstack/dnsmasq-dns-5ddd577785-rdw7w"
Jan 22 09:30:51 crc kubenswrapper[4892]: I0122 09:30:51.482759 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce583610-456f-400a-b840-9d2c65ecf4a6-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-rdw7w\" (UID: \"ce583610-456f-400a-b840-9d2c65ecf4a6\") " pod="openstack/dnsmasq-dns-5ddd577785-rdw7w"
Jan 22 09:30:51 crc kubenswrapper[4892]: I0122 09:30:51.483821 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce583610-456f-400a-b840-9d2c65ecf4a6-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-rdw7w\" (UID: \"ce583610-456f-400a-b840-9d2c65ecf4a6\") " pod="openstack/dnsmasq-dns-5ddd577785-rdw7w"
Jan 22 09:30:51 crc kubenswrapper[4892]: I0122 09:30:51.483897 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce583610-456f-400a-b840-9d2c65ecf4a6-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-rdw7w\" (UID: \"ce583610-456f-400a-b840-9d2c65ecf4a6\") " pod="openstack/dnsmasq-dns-5ddd577785-rdw7w"
Jan 22 09:30:51 crc kubenswrapper[4892]: I0122 09:30:51.484000 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mssvm\" (UniqueName: \"kubernetes.io/projected/ce583610-456f-400a-b840-9d2c65ecf4a6-kube-api-access-mssvm\") pod \"dnsmasq-dns-5ddd577785-rdw7w\" (UID: \"ce583610-456f-400a-b840-9d2c65ecf4a6\") " pod="openstack/dnsmasq-dns-5ddd577785-rdw7w"
Jan 22 09:30:51 crc kubenswrapper[4892]: I0122 09:30:51.484416 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce583610-456f-400a-b840-9d2c65ecf4a6-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-rdw7w\" (UID: \"ce583610-456f-400a-b840-9d2c65ecf4a6\") " pod="openstack/dnsmasq-dns-5ddd577785-rdw7w"
Jan 22 09:30:51 crc kubenswrapper[4892]: I0122 09:30:51.484520 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce583610-456f-400a-b840-9d2c65ecf4a6-config\") pod \"dnsmasq-dns-5ddd577785-rdw7w\" (UID: \"ce583610-456f-400a-b840-9d2c65ecf4a6\") " pod="openstack/dnsmasq-dns-5ddd577785-rdw7w"
Jan 22 09:30:51 crc kubenswrapper[4892]: I0122 09:30:51.484590 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce583610-456f-400a-b840-9d2c65ecf4a6-dns-svc\") pod \"dnsmasq-dns-5ddd577785-rdw7w\" (UID: \"ce583610-456f-400a-b840-9d2c65ecf4a6\") " pod="openstack/dnsmasq-dns-5ddd577785-rdw7w"
Jan 22 09:30:51 crc kubenswrapper[4892]: I0122 09:30:51.484742 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce583610-456f-400a-b840-9d2c65ecf4a6-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-rdw7w\" (UID: \"ce583610-456f-400a-b840-9d2c65ecf4a6\") " pod="openstack/dnsmasq-dns-5ddd577785-rdw7w"
Jan 22 09:30:51 crc kubenswrapper[4892]: I0122 09:30:51.485488 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce583610-456f-400a-b840-9d2c65ecf4a6-dns-svc\") pod \"dnsmasq-dns-5ddd577785-rdw7w\" (UID: \"ce583610-456f-400a-b840-9d2c65ecf4a6\") " pod="openstack/dnsmasq-dns-5ddd577785-rdw7w"
Jan 22 09:30:51 crc kubenswrapper[4892]: I0122 09:30:51.485815 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce583610-456f-400a-b840-9d2c65ecf4a6-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-rdw7w\" (UID: \"ce583610-456f-400a-b840-9d2c65ecf4a6\") " pod="openstack/dnsmasq-dns-5ddd577785-rdw7w"
Jan 22 09:30:51 crc kubenswrapper[4892]: I0122 09:30:51.486210 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce583610-456f-400a-b840-9d2c65ecf4a6-config\") pod \"dnsmasq-dns-5ddd577785-rdw7w\" (UID: \"ce583610-456f-400a-b840-9d2c65ecf4a6\") " pod="openstack/dnsmasq-dns-5ddd577785-rdw7w"
Jan 22 09:30:51 crc kubenswrapper[4892]: I0122 09:30:51.510137 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mssvm\" (UniqueName: \"kubernetes.io/projected/ce583610-456f-400a-b840-9d2c65ecf4a6-kube-api-access-mssvm\") pod \"dnsmasq-dns-5ddd577785-rdw7w\" (UID: \"ce583610-456f-400a-b840-9d2c65ecf4a6\") " pod="openstack/dnsmasq-dns-5ddd577785-rdw7w"
Jan 22 09:30:51 crc kubenswrapper[4892]: I0122 09:30:51.560067 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-rdw7w"
Jan 22 09:30:52 crc kubenswrapper[4892]: I0122 09:30:52.005036 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-rdw7w"]
Jan 22 09:30:52 crc kubenswrapper[4892]: I0122 09:30:52.686264 4892 generic.go:334] "Generic (PLEG): container finished" podID="ce583610-456f-400a-b840-9d2c65ecf4a6" containerID="7505806c1d05928ae4c7edfdd63022bca6230d3e12753e8266f34705c97849ce" exitCode=0
Jan 22 09:30:52 crc kubenswrapper[4892]: I0122 09:30:52.686426 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-rdw7w" event={"ID":"ce583610-456f-400a-b840-9d2c65ecf4a6","Type":"ContainerDied","Data":"7505806c1d05928ae4c7edfdd63022bca6230d3e12753e8266f34705c97849ce"}
Jan 22 09:30:52 crc kubenswrapper[4892]: I0122 09:30:52.686786 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-rdw7w" event={"ID":"ce583610-456f-400a-b840-9d2c65ecf4a6","Type":"ContainerStarted","Data":"48c2d0fdfa9849cf575c0e48d34a275a6fbb901f48e60c30a5efd3dae0c34fc4"}
Jan 22 09:30:53 crc kubenswrapper[4892]: I0122 09:30:53.161211 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 09:30:53 crc kubenswrapper[4892]: I0122 09:30:53.161489 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f98018bd-15b2-4676-8292-70e1f59e6a95" containerName="ceilometer-central-agent" containerID="cri-o://dac45c374f8cb8181ccaf5a603249669a4c8e5812a0f005c377ae9b7538e95b6" gracePeriod=30
Jan 22 09:30:53 crc kubenswrapper[4892]: I0122 09:30:53.161592 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f98018bd-15b2-4676-8292-70e1f59e6a95" containerName="ceilometer-notification-agent" containerID="cri-o://1bda43a6db1543f0b4d10371e0ff75775d6df075e836aa88200d595d74b696ca" gracePeriod=30
Jan 22 09:30:53 crc kubenswrapper[4892]: I0122 09:30:53.161545 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f98018bd-15b2-4676-8292-70e1f59e6a95" containerName="sg-core" containerID="cri-o://72517a55218c3a1b2a3d07e44b65b0bc918c3574175b7a737ca8ef62fe780e3f" gracePeriod=30
Jan 22 09:30:53 crc kubenswrapper[4892]: I0122 09:30:53.161602 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f98018bd-15b2-4676-8292-70e1f59e6a95" containerName="proxy-httpd" containerID="cri-o://2e133bf26da08058983692c47122d81b9912dfb4dfc3847940024d682a445bda" gracePeriod=30
Jan 22 09:30:53 crc kubenswrapper[4892]: I0122 09:30:53.166532 4892 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="f98018bd-15b2-4676-8292-70e1f59e6a95" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.196:3000/\": EOF"
Jan 22 09:30:53 crc kubenswrapper[4892]: I0122 09:30:53.432552 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 22 09:30:53 crc kubenswrapper[4892]: I0122 09:30:53.698369 4892 generic.go:334] "Generic (PLEG): container finished" podID="f98018bd-15b2-4676-8292-70e1f59e6a95" containerID="2e133bf26da08058983692c47122d81b9912dfb4dfc3847940024d682a445bda" exitCode=0
Jan 22 09:30:53 crc kubenswrapper[4892]: I0122 09:30:53.698407 4892 generic.go:334] "Generic (PLEG): container finished" podID="f98018bd-15b2-4676-8292-70e1f59e6a95" containerID="72517a55218c3a1b2a3d07e44b65b0bc918c3574175b7a737ca8ef62fe780e3f" exitCode=2
Jan 22 09:30:53 crc kubenswrapper[4892]: I0122 09:30:53.698415 4892 generic.go:334] "Generic (PLEG): container finished" podID="f98018bd-15b2-4676-8292-70e1f59e6a95" containerID="dac45c374f8cb8181ccaf5a603249669a4c8e5812a0f005c377ae9b7538e95b6" exitCode=0
Jan 22 09:30:53 crc kubenswrapper[4892]: I0122 09:30:53.698453 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f98018bd-15b2-4676-8292-70e1f59e6a95","Type":"ContainerDied","Data":"2e133bf26da08058983692c47122d81b9912dfb4dfc3847940024d682a445bda"}
Jan 22 09:30:53 crc kubenswrapper[4892]: I0122 09:30:53.698479 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f98018bd-15b2-4676-8292-70e1f59e6a95","Type":"ContainerDied","Data":"72517a55218c3a1b2a3d07e44b65b0bc918c3574175b7a737ca8ef62fe780e3f"}
Jan 22 09:30:53 crc kubenswrapper[4892]: I0122 09:30:53.698488 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f98018bd-15b2-4676-8292-70e1f59e6a95","Type":"ContainerDied","Data":"dac45c374f8cb8181ccaf5a603249669a4c8e5812a0f005c377ae9b7538e95b6"}
Jan 22 09:30:53 crc kubenswrapper[4892]: I0122 09:30:53.701916 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-rdw7w" event={"ID":"ce583610-456f-400a-b840-9d2c65ecf4a6","Type":"ContainerStarted","Data":"ade6274ed31e4c12f5e3b614cb123e1a8eb78d2efa08a847db314714639be0cf"}
Jan 22 09:30:53 crc kubenswrapper[4892]: I0122 09:30:53.701984 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e4bfa160-bff6-4872-a900-2c112fe4c587" containerName="nova-api-log" containerID="cri-o://bd1026a45dad2806fb10fb1cb674b50b6870a8a0a21b8a57ccbd009935a7dd67" gracePeriod=30
Jan 22 09:30:53 crc kubenswrapper[4892]: I0122 09:30:53.702039 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e4bfa160-bff6-4872-a900-2c112fe4c587" containerName="nova-api-api" containerID="cri-o://baf74012e96de11ff1f814df033e29ca9c6a45b6a7bf9c8f62f2a4b3fa76bc56" gracePeriod=30
Jan 22 09:30:53 crc kubenswrapper[4892]: I0122 09:30:53.729956 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ddd577785-rdw7w" podStartSLOduration=2.729936121 podStartE2EDuration="2.729936121s" podCreationTimestamp="2026-01-22 09:30:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:30:53.727548253 +0000 UTC m=+1223.571627326" watchObservedRunningTime="2026-01-22 09:30:53.729936121 +0000 UTC m=+1223.574015194"
Jan 22 09:30:54 crc kubenswrapper[4892]: I0122 09:30:54.061141 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 22 09:30:54 crc kubenswrapper[4892]: I0122 09:30:54.101725 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 22 09:30:54 crc kubenswrapper[4892]: I0122 09:30:54.101774 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 22 09:30:54 crc kubenswrapper[4892]: I0122 09:30:54.711847 4892 generic.go:334] "Generic (PLEG): container finished" podID="e4bfa160-bff6-4872-a900-2c112fe4c587" containerID="bd1026a45dad2806fb10fb1cb674b50b6870a8a0a21b8a57ccbd009935a7dd67" exitCode=143
Jan 22 09:30:54 crc kubenswrapper[4892]: I0122 09:30:54.711927 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e4bfa160-bff6-4872-a900-2c112fe4c587","Type":"ContainerDied","Data":"bd1026a45dad2806fb10fb1cb674b50b6870a8a0a21b8a57ccbd009935a7dd67"}
Jan 22 09:30:54 crc kubenswrapper[4892]: I0122 09:30:54.712429 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ddd577785-rdw7w"
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.376696 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.525635 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4bfa160-bff6-4872-a900-2c112fe4c587-config-data\") pod \"e4bfa160-bff6-4872-a900-2c112fe4c587\" (UID: \"e4bfa160-bff6-4872-a900-2c112fe4c587\") "
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.525730 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvc7t\" (UniqueName: \"kubernetes.io/projected/e4bfa160-bff6-4872-a900-2c112fe4c587-kube-api-access-vvc7t\") pod \"e4bfa160-bff6-4872-a900-2c112fe4c587\" (UID: \"e4bfa160-bff6-4872-a900-2c112fe4c587\") "
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.525776 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4bfa160-bff6-4872-a900-2c112fe4c587-logs\") pod \"e4bfa160-bff6-4872-a900-2c112fe4c587\" (UID: \"e4bfa160-bff6-4872-a900-2c112fe4c587\") "
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.525836 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4bfa160-bff6-4872-a900-2c112fe4c587-combined-ca-bundle\") pod \"e4bfa160-bff6-4872-a900-2c112fe4c587\" (UID: \"e4bfa160-bff6-4872-a900-2c112fe4c587\") "
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.526616 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4bfa160-bff6-4872-a900-2c112fe4c587-logs" (OuterVolumeSpecName: "logs") pod "e4bfa160-bff6-4872-a900-2c112fe4c587" (UID: "e4bfa160-bff6-4872-a900-2c112fe4c587"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.531914 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4bfa160-bff6-4872-a900-2c112fe4c587-kube-api-access-vvc7t" (OuterVolumeSpecName: "kube-api-access-vvc7t") pod "e4bfa160-bff6-4872-a900-2c112fe4c587" (UID: "e4bfa160-bff6-4872-a900-2c112fe4c587"). InnerVolumeSpecName "kube-api-access-vvc7t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.562844 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4bfa160-bff6-4872-a900-2c112fe4c587-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4bfa160-bff6-4872-a900-2c112fe4c587" (UID: "e4bfa160-bff6-4872-a900-2c112fe4c587"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.570464 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4bfa160-bff6-4872-a900-2c112fe4c587-config-data" (OuterVolumeSpecName: "config-data") pod "e4bfa160-bff6-4872-a900-2c112fe4c587" (UID: "e4bfa160-bff6-4872-a900-2c112fe4c587"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.628153 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvc7t\" (UniqueName: \"kubernetes.io/projected/e4bfa160-bff6-4872-a900-2c112fe4c587-kube-api-access-vvc7t\") on node \"crc\" DevicePath \"\""
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.628180 4892 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4bfa160-bff6-4872-a900-2c112fe4c587-logs\") on node \"crc\" DevicePath \"\""
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.628190 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4bfa160-bff6-4872-a900-2c112fe4c587-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.628198 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4bfa160-bff6-4872-a900-2c112fe4c587-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.742885 4892 generic.go:334] "Generic (PLEG): container finished" podID="e4bfa160-bff6-4872-a900-2c112fe4c587" containerID="baf74012e96de11ff1f814df033e29ca9c6a45b6a7bf9c8f62f2a4b3fa76bc56" exitCode=0
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.742915 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.742936 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e4bfa160-bff6-4872-a900-2c112fe4c587","Type":"ContainerDied","Data":"baf74012e96de11ff1f814df033e29ca9c6a45b6a7bf9c8f62f2a4b3fa76bc56"}
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.742969 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e4bfa160-bff6-4872-a900-2c112fe4c587","Type":"ContainerDied","Data":"db130de3d75faa7861d0a938f7586ed421ebb73eb3c41f28eec3f417327e3935"}
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.742988 4892 scope.go:117] "RemoveContainer" containerID="baf74012e96de11ff1f814df033e29ca9c6a45b6a7bf9c8f62f2a4b3fa76bc56"
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.763490 4892 scope.go:117] "RemoveContainer" containerID="bd1026a45dad2806fb10fb1cb674b50b6870a8a0a21b8a57ccbd009935a7dd67"
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.781762 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.791269 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.797514 4892 scope.go:117] "RemoveContainer" containerID="baf74012e96de11ff1f814df033e29ca9c6a45b6a7bf9c8f62f2a4b3fa76bc56"
Jan 22 09:30:57 crc kubenswrapper[4892]: E0122 09:30:57.801458 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"baf74012e96de11ff1f814df033e29ca9c6a45b6a7bf9c8f62f2a4b3fa76bc56\": container with ID starting with baf74012e96de11ff1f814df033e29ca9c6a45b6a7bf9c8f62f2a4b3fa76bc56 not found: ID does not exist" containerID="baf74012e96de11ff1f814df033e29ca9c6a45b6a7bf9c8f62f2a4b3fa76bc56"
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.801516 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baf74012e96de11ff1f814df033e29ca9c6a45b6a7bf9c8f62f2a4b3fa76bc56"} err="failed to get container status \"baf74012e96de11ff1f814df033e29ca9c6a45b6a7bf9c8f62f2a4b3fa76bc56\": rpc error: code = NotFound desc = could not find container \"baf74012e96de11ff1f814df033e29ca9c6a45b6a7bf9c8f62f2a4b3fa76bc56\": container with ID starting with baf74012e96de11ff1f814df033e29ca9c6a45b6a7bf9c8f62f2a4b3fa76bc56 not found: ID does not exist"
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.801552 4892 scope.go:117] "RemoveContainer" containerID="bd1026a45dad2806fb10fb1cb674b50b6870a8a0a21b8a57ccbd009935a7dd67"
Jan 22 09:30:57 crc kubenswrapper[4892]: E0122 09:30:57.802552 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd1026a45dad2806fb10fb1cb674b50b6870a8a0a21b8a57ccbd009935a7dd67\": container with ID starting with bd1026a45dad2806fb10fb1cb674b50b6870a8a0a21b8a57ccbd009935a7dd67 not found: ID does not exist" containerID="bd1026a45dad2806fb10fb1cb674b50b6870a8a0a21b8a57ccbd009935a7dd67"
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.802591 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd1026a45dad2806fb10fb1cb674b50b6870a8a0a21b8a57ccbd009935a7dd67"} err="failed to get container status \"bd1026a45dad2806fb10fb1cb674b50b6870a8a0a21b8a57ccbd009935a7dd67\": rpc error: code = NotFound desc = could not find container \"bd1026a45dad2806fb10fb1cb674b50b6870a8a0a21b8a57ccbd009935a7dd67\": container with ID starting with bd1026a45dad2806fb10fb1cb674b50b6870a8a0a21b8a57ccbd009935a7dd67 not found: ID does not exist"
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.810003 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 22 09:30:57 crc kubenswrapper[4892]: E0122 09:30:57.811245 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4bfa160-bff6-4872-a900-2c112fe4c587" containerName="nova-api-log"
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.811267 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4bfa160-bff6-4872-a900-2c112fe4c587" containerName="nova-api-log"
Jan 22 09:30:57 crc kubenswrapper[4892]: E0122 09:30:57.811312 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4bfa160-bff6-4872-a900-2c112fe4c587" containerName="nova-api-api"
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.811319 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4bfa160-bff6-4872-a900-2c112fe4c587" containerName="nova-api-api"
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.811505 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4bfa160-bff6-4872-a900-2c112fe4c587" containerName="nova-api-log"
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.811533 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4bfa160-bff6-4872-a900-2c112fe4c587" containerName="nova-api-api"
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.812440 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.814766 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.814833 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.816143 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.821213 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.932365 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a3ff585-adf7-489e-8987-74d52e3cbe73-public-tls-certs\") pod \"nova-api-0\" (UID: \"9a3ff585-adf7-489e-8987-74d52e3cbe73\") " pod="openstack/nova-api-0"
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.932733 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-822ws\" (UniqueName: \"kubernetes.io/projected/9a3ff585-adf7-489e-8987-74d52e3cbe73-kube-api-access-822ws\") pod \"nova-api-0\" (UID: \"9a3ff585-adf7-489e-8987-74d52e3cbe73\") " pod="openstack/nova-api-0"
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.932757 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a3ff585-adf7-489e-8987-74d52e3cbe73-logs\") pod \"nova-api-0\" (UID: \"9a3ff585-adf7-489e-8987-74d52e3cbe73\") " pod="openstack/nova-api-0"
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.932980 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a3ff585-adf7-489e-8987-74d52e3cbe73-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9a3ff585-adf7-489e-8987-74d52e3cbe73\") " pod="openstack/nova-api-0"
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.933143 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a3ff585-adf7-489e-8987-74d52e3cbe73-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9a3ff585-adf7-489e-8987-74d52e3cbe73\") " pod="openstack/nova-api-0"
Jan 22 09:30:57 crc kubenswrapper[4892]: I0122 09:30:57.933252 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a3ff585-adf7-489e-8987-74d52e3cbe73-config-data\") pod \"nova-api-0\" (UID: \"9a3ff585-adf7-489e-8987-74d52e3cbe73\") " pod="openstack/nova-api-0"
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.035077 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a3ff585-adf7-489e-8987-74d52e3cbe73-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9a3ff585-adf7-489e-8987-74d52e3cbe73\") " pod="openstack/nova-api-0"
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.035130 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a3ff585-adf7-489e-8987-74d52e3cbe73-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9a3ff585-adf7-489e-8987-74d52e3cbe73\") " pod="openstack/nova-api-0"
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.035174 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a3ff585-adf7-489e-8987-74d52e3cbe73-config-data\") pod \"nova-api-0\" (UID: \"9a3ff585-adf7-489e-8987-74d52e3cbe73\") " pod="openstack/nova-api-0"
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.035241 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a3ff585-adf7-489e-8987-74d52e3cbe73-public-tls-certs\") pod \"nova-api-0\" (UID: \"9a3ff585-adf7-489e-8987-74d52e3cbe73\") " pod="openstack/nova-api-0"
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.035279 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-822ws\" (UniqueName: \"kubernetes.io/projected/9a3ff585-adf7-489e-8987-74d52e3cbe73-kube-api-access-822ws\") pod \"nova-api-0\" (UID: \"9a3ff585-adf7-489e-8987-74d52e3cbe73\") " pod="openstack/nova-api-0"
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.035317 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a3ff585-adf7-489e-8987-74d52e3cbe73-logs\") pod \"nova-api-0\" (UID: \"9a3ff585-adf7-489e-8987-74d52e3cbe73\") " pod="openstack/nova-api-0"
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.035820 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a3ff585-adf7-489e-8987-74d52e3cbe73-logs\") pod \"nova-api-0\" (UID: \"9a3ff585-adf7-489e-8987-74d52e3cbe73\") " pod="openstack/nova-api-0"
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.041171 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a3ff585-adf7-489e-8987-74d52e3cbe73-public-tls-certs\") pod \"nova-api-0\" (UID: \"9a3ff585-adf7-489e-8987-74d52e3cbe73\") " pod="openstack/nova-api-0"
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.041747 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a3ff585-adf7-489e-8987-74d52e3cbe73-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9a3ff585-adf7-489e-8987-74d52e3cbe73\") " pod="openstack/nova-api-0"
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.042454 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a3ff585-adf7-489e-8987-74d52e3cbe73-config-data\") pod \"nova-api-0\" (UID: \"9a3ff585-adf7-489e-8987-74d52e3cbe73\") " pod="openstack/nova-api-0"
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.042710 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a3ff585-adf7-489e-8987-74d52e3cbe73-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9a3ff585-adf7-489e-8987-74d52e3cbe73\") " pod="openstack/nova-api-0"
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.051337 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-822ws\" (UniqueName: \"kubernetes.io/projected/9a3ff585-adf7-489e-8987-74d52e3cbe73-kube-api-access-822ws\") pod \"nova-api-0\" (UID: \"9a3ff585-adf7-489e-8987-74d52e3cbe73\") " pod="openstack/nova-api-0"
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.127699 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.623399 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.675760 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.751768 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9a3ff585-adf7-489e-8987-74d52e3cbe73","Type":"ContainerStarted","Data":"3a6ae3f28d965014c61dd766cec3a053a866e92d4893255b5b3173dd30dabc8f"}
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.756022 4892 generic.go:334] "Generic (PLEG): container finished" podID="f98018bd-15b2-4676-8292-70e1f59e6a95" containerID="1bda43a6db1543f0b4d10371e0ff75775d6df075e836aa88200d595d74b696ca" exitCode=0
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.756057 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f98018bd-15b2-4676-8292-70e1f59e6a95","Type":"ContainerDied","Data":"1bda43a6db1543f0b4d10371e0ff75775d6df075e836aa88200d595d74b696ca"}
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.756119 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f98018bd-15b2-4676-8292-70e1f59e6a95","Type":"ContainerDied","Data":"fa7f64883278c33c1d1aba1e77805ad3bfc698fc90b60ee4f17cc8350680c4c3"}
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.756142 4892 scope.go:117] "RemoveContainer" containerID="2e133bf26da08058983692c47122d81b9912dfb4dfc3847940024d682a445bda"
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.756250 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.775268 4892 scope.go:117] "RemoveContainer" containerID="72517a55218c3a1b2a3d07e44b65b0bc918c3574175b7a737ca8ef62fe780e3f"
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.801887 4892 scope.go:117] "RemoveContainer" containerID="1bda43a6db1543f0b4d10371e0ff75775d6df075e836aa88200d595d74b696ca"
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.823014 4892 scope.go:117] "RemoveContainer" containerID="dac45c374f8cb8181ccaf5a603249669a4c8e5812a0f005c377ae9b7538e95b6"
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.844787 4892 scope.go:117] "RemoveContainer" containerID="2e133bf26da08058983692c47122d81b9912dfb4dfc3847940024d682a445bda"
Jan 22 09:30:58 crc kubenswrapper[4892]: E0122 09:30:58.870021 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e133bf26da08058983692c47122d81b9912dfb4dfc3847940024d682a445bda\": container with ID starting with 2e133bf26da08058983692c47122d81b9912dfb4dfc3847940024d682a445bda not found: ID does not exist" containerID="2e133bf26da08058983692c47122d81b9912dfb4dfc3847940024d682a445bda"
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.870064 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e133bf26da08058983692c47122d81b9912dfb4dfc3847940024d682a445bda"} err="failed to get container status \"2e133bf26da08058983692c47122d81b9912dfb4dfc3847940024d682a445bda\": rpc error: code = NotFound desc = could not find container \"2e133bf26da08058983692c47122d81b9912dfb4dfc3847940024d682a445bda\": container with ID starting with 2e133bf26da08058983692c47122d81b9912dfb4dfc3847940024d682a445bda not found: ID does not exist"
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.870093 4892 scope.go:117] "RemoveContainer" containerID="72517a55218c3a1b2a3d07e44b65b0bc918c3574175b7a737ca8ef62fe780e3f"
Jan 22 09:30:58 crc kubenswrapper[4892]: E0122 09:30:58.870433 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72517a55218c3a1b2a3d07e44b65b0bc918c3574175b7a737ca8ef62fe780e3f\": container with ID starting with 72517a55218c3a1b2a3d07e44b65b0bc918c3574175b7a737ca8ef62fe780e3f not found: ID does not exist" containerID="72517a55218c3a1b2a3d07e44b65b0bc918c3574175b7a737ca8ef62fe780e3f"
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.870480 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72517a55218c3a1b2a3d07e44b65b0bc918c3574175b7a737ca8ef62fe780e3f"} err="failed to get container status \"72517a55218c3a1b2a3d07e44b65b0bc918c3574175b7a737ca8ef62fe780e3f\": rpc error: code = NotFound desc = could not find container \"72517a55218c3a1b2a3d07e44b65b0bc918c3574175b7a737ca8ef62fe780e3f\": container with ID starting with 72517a55218c3a1b2a3d07e44b65b0bc918c3574175b7a737ca8ef62fe780e3f not found: ID does not exist"
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.870511 4892 scope.go:117] "RemoveContainer" containerID="1bda43a6db1543f0b4d10371e0ff75775d6df075e836aa88200d595d74b696ca"
Jan 22 09:30:58 crc kubenswrapper[4892]: E0122 09:30:58.870817 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bda43a6db1543f0b4d10371e0ff75775d6df075e836aa88200d595d74b696ca\": container with ID starting with 1bda43a6db1543f0b4d10371e0ff75775d6df075e836aa88200d595d74b696ca not found: ID does not exist" containerID="1bda43a6db1543f0b4d10371e0ff75775d6df075e836aa88200d595d74b696ca"
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.870854 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bda43a6db1543f0b4d10371e0ff75775d6df075e836aa88200d595d74b696ca"} err="failed to get container status \"1bda43a6db1543f0b4d10371e0ff75775d6df075e836aa88200d595d74b696ca\": rpc error: code = NotFound desc = could not find container \"1bda43a6db1543f0b4d10371e0ff75775d6df075e836aa88200d595d74b696ca\": container with ID starting with 1bda43a6db1543f0b4d10371e0ff75775d6df075e836aa88200d595d74b696ca not found: ID does not exist"
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.870875 4892 scope.go:117] "RemoveContainer" containerID="dac45c374f8cb8181ccaf5a603249669a4c8e5812a0f005c377ae9b7538e95b6"
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.871324 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f98018bd-15b2-4676-8292-70e1f59e6a95-log-httpd\") pod \"f98018bd-15b2-4676-8292-70e1f59e6a95\" (UID: \"f98018bd-15b2-4676-8292-70e1f59e6a95\") "
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.871363 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98018bd-15b2-4676-8292-70e1f59e6a95-combined-ca-bundle\") pod \"f98018bd-15b2-4676-8292-70e1f59e6a95\" (UID: \"f98018bd-15b2-4676-8292-70e1f59e6a95\") "
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.871435 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f98018bd-15b2-4676-8292-70e1f59e6a95-scripts\") pod \"f98018bd-15b2-4676-8292-70e1f59e6a95\" (UID: \"f98018bd-15b2-4676-8292-70e1f59e6a95\") "
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.871484 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f98018bd-15b2-4676-8292-70e1f59e6a95-ceilometer-tls-certs\") pod \"f98018bd-15b2-4676-8292-70e1f59e6a95\" (UID: \"f98018bd-15b2-4676-8292-70e1f59e6a95\") "
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.871511 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98018bd-15b2-4676-8292-70e1f59e6a95-config-data\") pod \"f98018bd-15b2-4676-8292-70e1f59e6a95\" (UID: \"f98018bd-15b2-4676-8292-70e1f59e6a95\") "
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.871535 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f98018bd-15b2-4676-8292-70e1f59e6a95-run-httpd\") pod \"f98018bd-15b2-4676-8292-70e1f59e6a95\" (UID: \"f98018bd-15b2-4676-8292-70e1f59e6a95\") "
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.871672 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f98018bd-15b2-4676-8292-70e1f59e6a95-sg-core-conf-yaml\") pod \"f98018bd-15b2-4676-8292-70e1f59e6a95\" (UID: \"f98018bd-15b2-4676-8292-70e1f59e6a95\") "
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.872022 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6s87r\" (UniqueName: \"kubernetes.io/projected/f98018bd-15b2-4676-8292-70e1f59e6a95-kube-api-access-6s87r\") pod \"f98018bd-15b2-4676-8292-70e1f59e6a95\" (UID: \"f98018bd-15b2-4676-8292-70e1f59e6a95\") "
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.872192 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f98018bd-15b2-4676-8292-70e1f59e6a95-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f98018bd-15b2-4676-8292-70e1f59e6a95" (UID: "f98018bd-15b2-4676-8292-70e1f59e6a95"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.872240 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f98018bd-15b2-4676-8292-70e1f59e6a95-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f98018bd-15b2-4676-8292-70e1f59e6a95" (UID: "f98018bd-15b2-4676-8292-70e1f59e6a95"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 09:30:58 crc kubenswrapper[4892]: E0122 09:30:58.872739 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dac45c374f8cb8181ccaf5a603249669a4c8e5812a0f005c377ae9b7538e95b6\": container with ID starting with dac45c374f8cb8181ccaf5a603249669a4c8e5812a0f005c377ae9b7538e95b6 not found: ID does not exist" containerID="dac45c374f8cb8181ccaf5a603249669a4c8e5812a0f005c377ae9b7538e95b6"
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.872782 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dac45c374f8cb8181ccaf5a603249669a4c8e5812a0f005c377ae9b7538e95b6"} err="failed to get container status \"dac45c374f8cb8181ccaf5a603249669a4c8e5812a0f005c377ae9b7538e95b6\": rpc error: code = NotFound desc = could not find container \"dac45c374f8cb8181ccaf5a603249669a4c8e5812a0f005c377ae9b7538e95b6\": container with ID starting with dac45c374f8cb8181ccaf5a603249669a4c8e5812a0f005c377ae9b7538e95b6 not found: ID does not exist"
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.872755 4892 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f98018bd-15b2-4676-8292-70e1f59e6a95-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.872811 4892 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f98018bd-15b2-4676-8292-70e1f59e6a95-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.876080 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f98018bd-15b2-4676-8292-70e1f59e6a95-kube-api-access-6s87r" (OuterVolumeSpecName: "kube-api-access-6s87r") pod "f98018bd-15b2-4676-8292-70e1f59e6a95" (UID: "f98018bd-15b2-4676-8292-70e1f59e6a95"). InnerVolumeSpecName "kube-api-access-6s87r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.876691 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f98018bd-15b2-4676-8292-70e1f59e6a95-scripts" (OuterVolumeSpecName: "scripts") pod "f98018bd-15b2-4676-8292-70e1f59e6a95" (UID: "f98018bd-15b2-4676-8292-70e1f59e6a95"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.910878 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f98018bd-15b2-4676-8292-70e1f59e6a95-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f98018bd-15b2-4676-8292-70e1f59e6a95" (UID: "f98018bd-15b2-4676-8292-70e1f59e6a95"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.929212 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f98018bd-15b2-4676-8292-70e1f59e6a95-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f98018bd-15b2-4676-8292-70e1f59e6a95" (UID: "f98018bd-15b2-4676-8292-70e1f59e6a95"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.961519 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f98018bd-15b2-4676-8292-70e1f59e6a95-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f98018bd-15b2-4676-8292-70e1f59e6a95" (UID: "f98018bd-15b2-4676-8292-70e1f59e6a95"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.974904 4892 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f98018bd-15b2-4676-8292-70e1f59e6a95-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.974946 4892 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f98018bd-15b2-4676-8292-70e1f59e6a95-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.974964 4892 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f98018bd-15b2-4676-8292-70e1f59e6a95-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.974975 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6s87r\" (UniqueName: \"kubernetes.io/projected/f98018bd-15b2-4676-8292-70e1f59e6a95-kube-api-access-6s87r\") on node \"crc\" DevicePath \"\""
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.974986 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98018bd-15b2-4676-8292-70e1f59e6a95-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 09:30:58 crc kubenswrapper[4892]: I0122 09:30:58.992492 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f98018bd-15b2-4676-8292-70e1f59e6a95-config-data" (OuterVolumeSpecName: "config-data") pod "f98018bd-15b2-4676-8292-70e1f59e6a95" (UID: "f98018bd-15b2-4676-8292-70e1f59e6a95"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.061991 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.075862 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98018bd-15b2-4676-8292-70e1f59e6a95-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.077918 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.101786 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.101848 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.210503 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.221042 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.242642 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 22 09:30:59 crc kubenswrapper[4892]: E0122 09:30:59.243095 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f98018bd-15b2-4676-8292-70e1f59e6a95" containerName="ceilometer-notification-agent"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.243117 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="f98018bd-15b2-4676-8292-70e1f59e6a95" containerName="ceilometer-notification-agent"
Jan 22 09:30:59 crc kubenswrapper[4892]: E0122 09:30:59.243135 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f98018bd-15b2-4676-8292-70e1f59e6a95" containerName="sg-core"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.243144 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="f98018bd-15b2-4676-8292-70e1f59e6a95" containerName="sg-core"
Jan 22 09:30:59 crc kubenswrapper[4892]: E0122 09:30:59.243177 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f98018bd-15b2-4676-8292-70e1f59e6a95" containerName="ceilometer-central-agent"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.243185 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="f98018bd-15b2-4676-8292-70e1f59e6a95" containerName="ceilometer-central-agent"
Jan 22 09:30:59 crc kubenswrapper[4892]: E0122 09:30:59.243194 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f98018bd-15b2-4676-8292-70e1f59e6a95" containerName="proxy-httpd"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.243201 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="f98018bd-15b2-4676-8292-70e1f59e6a95" containerName="proxy-httpd"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.243481 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="f98018bd-15b2-4676-8292-70e1f59e6a95" containerName="sg-core"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.243512 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="f98018bd-15b2-4676-8292-70e1f59e6a95" containerName="ceilometer-central-agent"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.243523 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="f98018bd-15b2-4676-8292-70e1f59e6a95" containerName="proxy-httpd"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.243538 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="f98018bd-15b2-4676-8292-70e1f59e6a95" containerName="ceilometer-notification-agent"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.245475 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.245584 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.249371 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.249545 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.249599 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.382840 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b08f54a7-5e8e-4143-8585-1c91201b25df-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b08f54a7-5e8e-4143-8585-1c91201b25df\") " pod="openstack/ceilometer-0"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.383185 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b08f54a7-5e8e-4143-8585-1c91201b25df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b08f54a7-5e8e-4143-8585-1c91201b25df\") " pod="openstack/ceilometer-0"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.383241 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b08f54a7-5e8e-4143-8585-1c91201b25df-run-httpd\") pod \"ceilometer-0\" (UID: \"b08f54a7-5e8e-4143-8585-1c91201b25df\") " pod="openstack/ceilometer-0"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.383308 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b08f54a7-5e8e-4143-8585-1c91201b25df-scripts\") pod \"ceilometer-0\" (UID: \"b08f54a7-5e8e-4143-8585-1c91201b25df\") " pod="openstack/ceilometer-0"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.383369 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6v68\" (UniqueName: \"kubernetes.io/projected/b08f54a7-5e8e-4143-8585-1c91201b25df-kube-api-access-g6v68\") pod \"ceilometer-0\" (UID: \"b08f54a7-5e8e-4143-8585-1c91201b25df\") " pod="openstack/ceilometer-0"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.383399 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b08f54a7-5e8e-4143-8585-1c91201b25df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b08f54a7-5e8e-4143-8585-1c91201b25df\") " pod="openstack/ceilometer-0"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.383463 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b08f54a7-5e8e-4143-8585-1c91201b25df-config-data\") pod \"ceilometer-0\" (UID: \"b08f54a7-5e8e-4143-8585-1c91201b25df\") " pod="openstack/ceilometer-0"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.383502 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b08f54a7-5e8e-4143-8585-1c91201b25df-log-httpd\") pod \"ceilometer-0\" (UID: \"b08f54a7-5e8e-4143-8585-1c91201b25df\") " pod="openstack/ceilometer-0"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.432926 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4bfa160-bff6-4872-a900-2c112fe4c587" path="/var/lib/kubelet/pods/e4bfa160-bff6-4872-a900-2c112fe4c587/volumes"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.433671 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f98018bd-15b2-4676-8292-70e1f59e6a95" path="/var/lib/kubelet/pods/f98018bd-15b2-4676-8292-70e1f59e6a95/volumes"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.484926 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b08f54a7-5e8e-4143-8585-1c91201b25df-scripts\") pod \"ceilometer-0\" (UID: \"b08f54a7-5e8e-4143-8585-1c91201b25df\") " pod="openstack/ceilometer-0"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.485015 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6v68\" (UniqueName: \"kubernetes.io/projected/b08f54a7-5e8e-4143-8585-1c91201b25df-kube-api-access-g6v68\") pod \"ceilometer-0\" (UID: \"b08f54a7-5e8e-4143-8585-1c91201b25df\") " pod="openstack/ceilometer-0"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.485050 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b08f54a7-5e8e-4143-8585-1c91201b25df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b08f54a7-5e8e-4143-8585-1c91201b25df\") " pod="openstack/ceilometer-0"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.485161 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b08f54a7-5e8e-4143-8585-1c91201b25df-config-data\") pod \"ceilometer-0\" (UID: \"b08f54a7-5e8e-4143-8585-1c91201b25df\") " pod="openstack/ceilometer-0"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.485208 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b08f54a7-5e8e-4143-8585-1c91201b25df-log-httpd\") pod \"ceilometer-0\" (UID: \"b08f54a7-5e8e-4143-8585-1c91201b25df\") " pod="openstack/ceilometer-0"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.485337 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b08f54a7-5e8e-4143-8585-1c91201b25df-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b08f54a7-5e8e-4143-8585-1c91201b25df\") " pod="openstack/ceilometer-0"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.485369 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b08f54a7-5e8e-4143-8585-1c91201b25df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b08f54a7-5e8e-4143-8585-1c91201b25df\") " pod="openstack/ceilometer-0"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.485438 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b08f54a7-5e8e-4143-8585-1c91201b25df-run-httpd\") pod \"ceilometer-0\" (UID: \"b08f54a7-5e8e-4143-8585-1c91201b25df\") " pod="openstack/ceilometer-0"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.486528 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b08f54a7-5e8e-4143-8585-1c91201b25df-run-httpd\") pod \"ceilometer-0\" (UID: \"b08f54a7-5e8e-4143-8585-1c91201b25df\") " pod="openstack/ceilometer-0"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.487115 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b08f54a7-5e8e-4143-8585-1c91201b25df-log-httpd\") pod \"ceilometer-0\" (UID: \"b08f54a7-5e8e-4143-8585-1c91201b25df\") " pod="openstack/ceilometer-0"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.490209 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b08f54a7-5e8e-4143-8585-1c91201b25df-scripts\") pod \"ceilometer-0\" (UID: \"b08f54a7-5e8e-4143-8585-1c91201b25df\") " pod="openstack/ceilometer-0"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.490770 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b08f54a7-5e8e-4143-8585-1c91201b25df-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b08f54a7-5e8e-4143-8585-1c91201b25df\") " pod="openstack/ceilometer-0"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.493078 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b08f54a7-5e8e-4143-8585-1c91201b25df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b08f54a7-5e8e-4143-8585-1c91201b25df\") " pod="openstack/ceilometer-0"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.501591 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b08f54a7-5e8e-4143-8585-1c91201b25df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b08f54a7-5e8e-4143-8585-1c91201b25df\") " pod="openstack/ceilometer-0"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.506991 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6v68\" (UniqueName: \"kubernetes.io/projected/b08f54a7-5e8e-4143-8585-1c91201b25df-kube-api-access-g6v68\") pod \"ceilometer-0\" (UID: \"b08f54a7-5e8e-4143-8585-1c91201b25df\") " pod="openstack/ceilometer-0"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.513523 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b08f54a7-5e8e-4143-8585-1c91201b25df-config-data\") pod \"ceilometer-0\" (UID: \"b08f54a7-5e8e-4143-8585-1c91201b25df\") " pod="openstack/ceilometer-0"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.567569 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.769587 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9a3ff585-adf7-489e-8987-74d52e3cbe73","Type":"ContainerStarted","Data":"794c20db8cf23553179dfbd167c65b91cb4c05318f36e5ebd1e4d5b136595a72"}
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.769958 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9a3ff585-adf7-489e-8987-74d52e3cbe73","Type":"ContainerStarted","Data":"6bd32c35b0a3073557125f764e8da7f57823bdb7d2d4c27ff16d2df3009901d0"}
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.792677 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.795326 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.795306065 podStartE2EDuration="2.795306065s" podCreationTimestamp="2026-01-22 09:30:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:30:59.792917857 +0000 UTC m=+1229.636996920" watchObservedRunningTime="2026-01-22 09:30:59.795306065 +0000 UTC m=+1229.639385128"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.930052 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-86jc5"]
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.931274 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-86jc5"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.944043 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-86jc5"]
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.978674 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Jan 22 09:30:59 crc kubenswrapper[4892]: I0122 09:30:59.978849 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Jan 22 09:31:00 crc kubenswrapper[4892]: W0122 09:31:00.038932 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb08f54a7_5e8e_4143_8585_1c91201b25df.slice/crio-0c1f7a94b40aa0906166112ea1b690e4157426bdccdce7801b1c4130ed7034a6 WatchSource:0}: Error finding container 0c1f7a94b40aa0906166112ea1b690e4157426bdccdce7801b1c4130ed7034a6: Status 404 returned error can't find the container with id 0c1f7a94b40aa0906166112ea1b690e4157426bdccdce7801b1c4130ed7034a6
Jan 22 09:31:00 crc kubenswrapper[4892]: I0122 09:31:00.039721 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 09:31:00 crc kubenswrapper[4892]: I0122 09:31:00.097646 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btqrs\" (UniqueName: \"kubernetes.io/projected/180a0abc-388c-4a6a-bd24-91a416481a38-kube-api-access-btqrs\") pod \"nova-cell1-cell-mapping-86jc5\" (UID: \"180a0abc-388c-4a6a-bd24-91a416481a38\") " pod="openstack/nova-cell1-cell-mapping-86jc5"
Jan 22 09:31:00 crc kubenswrapper[4892]: I0122 09:31:00.097709 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/180a0abc-388c-4a6a-bd24-91a416481a38-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-86jc5\" (UID: \"180a0abc-388c-4a6a-bd24-91a416481a38\") " pod="openstack/nova-cell1-cell-mapping-86jc5"
Jan 22 09:31:00 crc kubenswrapper[4892]: I0122 09:31:00.097772 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/180a0abc-388c-4a6a-bd24-91a416481a38-config-data\") pod \"nova-cell1-cell-mapping-86jc5\" (UID: \"180a0abc-388c-4a6a-bd24-91a416481a38\") " pod="openstack/nova-cell1-cell-mapping-86jc5"
Jan 22 09:31:00 crc kubenswrapper[4892]: I0122 09:31:00.097816 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/180a0abc-388c-4a6a-bd24-91a416481a38-scripts\") pod \"nova-cell1-cell-mapping-86jc5\" (UID: \"180a0abc-388c-4a6a-bd24-91a416481a38\") " pod="openstack/nova-cell1-cell-mapping-86jc5"
Jan 22 09:31:00 crc kubenswrapper[4892]: I0122 09:31:00.115441 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7b5fefad-3b0d-4d26-8fe3-2e117ea96708" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.200:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 22 09:31:00 crc kubenswrapper[4892]: I0122 09:31:00.115441 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7b5fefad-3b0d-4d26-8fe3-2e117ea96708" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.200:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 22 09:31:00 crc kubenswrapper[4892]: I0122 09:31:00.199314 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btqrs\" (UniqueName: \"kubernetes.io/projected/180a0abc-388c-4a6a-bd24-91a416481a38-kube-api-access-btqrs\") pod \"nova-cell1-cell-mapping-86jc5\" (UID: \"180a0abc-388c-4a6a-bd24-91a416481a38\") " pod="openstack/nova-cell1-cell-mapping-86jc5"
Jan 22 09:31:00 crc kubenswrapper[4892]: I0122 09:31:00.199371 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/180a0abc-388c-4a6a-bd24-91a416481a38-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-86jc5\" (UID: \"180a0abc-388c-4a6a-bd24-91a416481a38\") " pod="openstack/nova-cell1-cell-mapping-86jc5"
Jan 22 09:31:00 crc kubenswrapper[4892]: I0122 09:31:00.199405 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/180a0abc-388c-4a6a-bd24-91a416481a38-config-data\") pod \"nova-cell1-cell-mapping-86jc5\" (UID: \"180a0abc-388c-4a6a-bd24-91a416481a38\") " pod="openstack/nova-cell1-cell-mapping-86jc5"
Jan 22 09:31:00 crc kubenswrapper[4892]: I0122 09:31:00.199449 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/180a0abc-388c-4a6a-bd24-91a416481a38-scripts\") pod \"nova-cell1-cell-mapping-86jc5\" (UID: \"180a0abc-388c-4a6a-bd24-91a416481a38\") " pod="openstack/nova-cell1-cell-mapping-86jc5"
Jan 22 09:31:00 crc kubenswrapper[4892]: I0122 09:31:00.205851 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/180a0abc-388c-4a6a-bd24-91a416481a38-scripts\") pod
\"nova-cell1-cell-mapping-86jc5\" (UID: \"180a0abc-388c-4a6a-bd24-91a416481a38\") " pod="openstack/nova-cell1-cell-mapping-86jc5" Jan 22 09:31:00 crc kubenswrapper[4892]: I0122 09:31:00.206466 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/180a0abc-388c-4a6a-bd24-91a416481a38-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-86jc5\" (UID: \"180a0abc-388c-4a6a-bd24-91a416481a38\") " pod="openstack/nova-cell1-cell-mapping-86jc5" Jan 22 09:31:00 crc kubenswrapper[4892]: I0122 09:31:00.208434 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/180a0abc-388c-4a6a-bd24-91a416481a38-config-data\") pod \"nova-cell1-cell-mapping-86jc5\" (UID: \"180a0abc-388c-4a6a-bd24-91a416481a38\") " pod="openstack/nova-cell1-cell-mapping-86jc5" Jan 22 09:31:00 crc kubenswrapper[4892]: I0122 09:31:00.216149 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btqrs\" (UniqueName: \"kubernetes.io/projected/180a0abc-388c-4a6a-bd24-91a416481a38-kube-api-access-btqrs\") pod \"nova-cell1-cell-mapping-86jc5\" (UID: \"180a0abc-388c-4a6a-bd24-91a416481a38\") " pod="openstack/nova-cell1-cell-mapping-86jc5" Jan 22 09:31:00 crc kubenswrapper[4892]: I0122 09:31:00.315347 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-86jc5" Jan 22 09:31:00 crc kubenswrapper[4892]: I0122 09:31:00.777521 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-86jc5"] Jan 22 09:31:00 crc kubenswrapper[4892]: W0122 09:31:00.779089 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod180a0abc_388c_4a6a_bd24_91a416481a38.slice/crio-ea2d443a3bb9bdaa8bbda4be1e2e2710fdec9b36e97041672617635565fc7282 WatchSource:0}: Error finding container ea2d443a3bb9bdaa8bbda4be1e2e2710fdec9b36e97041672617635565fc7282: Status 404 returned error can't find the container with id ea2d443a3bb9bdaa8bbda4be1e2e2710fdec9b36e97041672617635565fc7282 Jan 22 09:31:00 crc kubenswrapper[4892]: I0122 09:31:00.789012 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b08f54a7-5e8e-4143-8585-1c91201b25df","Type":"ContainerStarted","Data":"b9b5727041111b0adb655d9e80c8290b3d3660f0e94f27415d96efa1568bc6cb"} Jan 22 09:31:00 crc kubenswrapper[4892]: I0122 09:31:00.789066 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b08f54a7-5e8e-4143-8585-1c91201b25df","Type":"ContainerStarted","Data":"0c1f7a94b40aa0906166112ea1b690e4157426bdccdce7801b1c4130ed7034a6"} Jan 22 09:31:01 crc kubenswrapper[4892]: I0122 09:31:01.561464 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ddd577785-rdw7w" Jan 22 09:31:01 crc kubenswrapper[4892]: I0122 09:31:01.644128 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-7x292"] Jan 22 09:31:01 crc kubenswrapper[4892]: I0122 09:31:01.644418 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-557bbc7df7-7x292" podUID="310458cb-5d40-4525-a26e-0df3583401c7" containerName="dnsmasq-dns" containerID="cri-o://a8f0efec2021aad932c80bd5e1218c2ffd6057a968f52efde6353b3fa3e1ac52" gracePeriod=10 Jan 22 09:31:01 crc kubenswrapper[4892]: I0122 09:31:01.815347 4892 
generic.go:334] "Generic (PLEG): container finished" podID="310458cb-5d40-4525-a26e-0df3583401c7" containerID="a8f0efec2021aad932c80bd5e1218c2ffd6057a968f52efde6353b3fa3e1ac52" exitCode=0 Jan 22 09:31:01 crc kubenswrapper[4892]: I0122 09:31:01.815411 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-7x292" event={"ID":"310458cb-5d40-4525-a26e-0df3583401c7","Type":"ContainerDied","Data":"a8f0efec2021aad932c80bd5e1218c2ffd6057a968f52efde6353b3fa3e1ac52"} Jan 22 09:31:01 crc kubenswrapper[4892]: I0122 09:31:01.818012 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-86jc5" event={"ID":"180a0abc-388c-4a6a-bd24-91a416481a38","Type":"ContainerStarted","Data":"f21ff5de1f4cad085f5b1c2c542fff98cd191d78b3c83e1f235e31deb9a6be5b"} Jan 22 09:31:01 crc kubenswrapper[4892]: I0122 09:31:01.818056 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-86jc5" event={"ID":"180a0abc-388c-4a6a-bd24-91a416481a38","Type":"ContainerStarted","Data":"ea2d443a3bb9bdaa8bbda4be1e2e2710fdec9b36e97041672617635565fc7282"} Jan 22 09:31:01 crc kubenswrapper[4892]: I0122 09:31:01.834787 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-86jc5" podStartSLOduration=2.834771118 podStartE2EDuration="2.834771118s" podCreationTimestamp="2026-01-22 09:30:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:31:01.832953904 +0000 UTC m=+1231.677032967" watchObservedRunningTime="2026-01-22 09:31:01.834771118 +0000 UTC m=+1231.678850191" Jan 22 09:31:02 crc kubenswrapper[4892]: I0122 09:31:02.188848 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-7x292" Jan 22 09:31:02 crc kubenswrapper[4892]: I0122 09:31:02.351947 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/310458cb-5d40-4525-a26e-0df3583401c7-config\") pod \"310458cb-5d40-4525-a26e-0df3583401c7\" (UID: \"310458cb-5d40-4525-a26e-0df3583401c7\") " Jan 22 09:31:02 crc kubenswrapper[4892]: I0122 09:31:02.352000 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8msh5\" (UniqueName: \"kubernetes.io/projected/310458cb-5d40-4525-a26e-0df3583401c7-kube-api-access-8msh5\") pod \"310458cb-5d40-4525-a26e-0df3583401c7\" (UID: \"310458cb-5d40-4525-a26e-0df3583401c7\") " Jan 22 09:31:02 crc kubenswrapper[4892]: I0122 09:31:02.352034 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/310458cb-5d40-4525-a26e-0df3583401c7-ovsdbserver-sb\") pod \"310458cb-5d40-4525-a26e-0df3583401c7\" (UID: \"310458cb-5d40-4525-a26e-0df3583401c7\") " Jan 22 09:31:02 crc kubenswrapper[4892]: I0122 09:31:02.352057 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/310458cb-5d40-4525-a26e-0df3583401c7-dns-swift-storage-0\") pod \"310458cb-5d40-4525-a26e-0df3583401c7\" (UID: \"310458cb-5d40-4525-a26e-0df3583401c7\") " Jan 22 09:31:02 crc kubenswrapper[4892]: I0122 09:31:02.352095 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/310458cb-5d40-4525-a26e-0df3583401c7-dns-svc\") pod \"310458cb-5d40-4525-a26e-0df3583401c7\" (UID: \"310458cb-5d40-4525-a26e-0df3583401c7\") " Jan 22 09:31:02 crc kubenswrapper[4892]: I0122 09:31:02.352164 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/310458cb-5d40-4525-a26e-0df3583401c7-ovsdbserver-nb\") pod \"310458cb-5d40-4525-a26e-0df3583401c7\" (UID: \"310458cb-5d40-4525-a26e-0df3583401c7\") " Jan 22 09:31:02 crc kubenswrapper[4892]: I0122 09:31:02.356616 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/310458cb-5d40-4525-a26e-0df3583401c7-kube-api-access-8msh5" (OuterVolumeSpecName: "kube-api-access-8msh5") pod "310458cb-5d40-4525-a26e-0df3583401c7" (UID: "310458cb-5d40-4525-a26e-0df3583401c7"). InnerVolumeSpecName "kube-api-access-8msh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:31:02 crc kubenswrapper[4892]: I0122 09:31:02.409107 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/310458cb-5d40-4525-a26e-0df3583401c7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "310458cb-5d40-4525-a26e-0df3583401c7" (UID: "310458cb-5d40-4525-a26e-0df3583401c7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:31:02 crc kubenswrapper[4892]: I0122 09:31:02.412076 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/310458cb-5d40-4525-a26e-0df3583401c7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "310458cb-5d40-4525-a26e-0df3583401c7" (UID: "310458cb-5d40-4525-a26e-0df3583401c7"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:31:02 crc kubenswrapper[4892]: I0122 09:31:02.427696 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/310458cb-5d40-4525-a26e-0df3583401c7-config" (OuterVolumeSpecName: "config") pod "310458cb-5d40-4525-a26e-0df3583401c7" (UID: "310458cb-5d40-4525-a26e-0df3583401c7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:31:02 crc kubenswrapper[4892]: I0122 09:31:02.432855 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/310458cb-5d40-4525-a26e-0df3583401c7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "310458cb-5d40-4525-a26e-0df3583401c7" (UID: "310458cb-5d40-4525-a26e-0df3583401c7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:31:02 crc kubenswrapper[4892]: I0122 09:31:02.453889 4892 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/310458cb-5d40-4525-a26e-0df3583401c7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 09:31:02 crc kubenswrapper[4892]: I0122 09:31:02.453921 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/310458cb-5d40-4525-a26e-0df3583401c7-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:31:02 crc kubenswrapper[4892]: I0122 09:31:02.453933 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8msh5\" (UniqueName: \"kubernetes.io/projected/310458cb-5d40-4525-a26e-0df3583401c7-kube-api-access-8msh5\") on node \"crc\" DevicePath \"\"" Jan 22 09:31:02 crc kubenswrapper[4892]: I0122 09:31:02.453943 4892 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/310458cb-5d40-4525-a26e-0df3583401c7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 09:31:02 crc kubenswrapper[4892]: I0122 09:31:02.453952 4892 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/310458cb-5d40-4525-a26e-0df3583401c7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 09:31:02 crc kubenswrapper[4892]: I0122 09:31:02.454796 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/310458cb-5d40-4525-a26e-0df3583401c7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "310458cb-5d40-4525-a26e-0df3583401c7" (UID: "310458cb-5d40-4525-a26e-0df3583401c7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:31:02 crc kubenswrapper[4892]: I0122 09:31:02.555498 4892 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/310458cb-5d40-4525-a26e-0df3583401c7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 09:31:02 crc kubenswrapper[4892]: I0122 09:31:02.836916 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b08f54a7-5e8e-4143-8585-1c91201b25df","Type":"ContainerStarted","Data":"3e8516e14baa5d33ad1f3c4056edc910ddbb41fcf2f7ae09b09d928ff612a7b5"} Jan 22 09:31:02 crc kubenswrapper[4892]: I0122 09:31:02.837804 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b08f54a7-5e8e-4143-8585-1c91201b25df","Type":"ContainerStarted","Data":"4c514cba06d52df8ccdddcb946033669fa2dd47e0a6b76849287dd09bfa207f7"} Jan 22 09:31:02 crc kubenswrapper[4892]: I0122 09:31:02.842481 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-7x292" Jan 22 09:31:02 crc kubenswrapper[4892]: I0122 09:31:02.842472 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-7x292" event={"ID":"310458cb-5d40-4525-a26e-0df3583401c7","Type":"ContainerDied","Data":"c3d14e81d3e92ba1a9b57eed245d802b6bb5d418ab52a22359355b6327c91d96"} Jan 22 09:31:02 crc kubenswrapper[4892]: I0122 09:31:02.842775 4892 scope.go:117] "RemoveContainer" containerID="a8f0efec2021aad932c80bd5e1218c2ffd6057a968f52efde6353b3fa3e1ac52" Jan 22 09:31:02 crc kubenswrapper[4892]: I0122 09:31:02.885508 4892 scope.go:117] "RemoveContainer" containerID="547e82401fb7ede95e3c2f650d577d3fd954c539b28cb0701064b43980e4d108" Jan 22 09:31:02 crc kubenswrapper[4892]: I0122 09:31:02.890457 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-7x292"] Jan 22 09:31:02 crc kubenswrapper[4892]: I0122 09:31:02.903503 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-7x292"] Jan 22 09:31:03 crc kubenswrapper[4892]: I0122 09:31:03.428013 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="310458cb-5d40-4525-a26e-0df3583401c7" path="/var/lib/kubelet/pods/310458cb-5d40-4525-a26e-0df3583401c7/volumes" Jan 22 09:31:04 crc kubenswrapper[4892]: I0122 09:31:04.864683 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b08f54a7-5e8e-4143-8585-1c91201b25df","Type":"ContainerStarted","Data":"d89169ae5cefd225197fe0eef1ad1ff4d8a1f803bbd125be4219e52c48020f4b"} Jan 22 09:31:04 crc kubenswrapper[4892]: I0122 09:31:04.865527 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 09:31:04 crc kubenswrapper[4892]: I0122 09:31:04.889873 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.996527048 podStartE2EDuration="5.889837661s" podCreationTimestamp="2026-01-22 09:30:59 +0000 UTC" firstStartedPulling="2026-01-22 09:31:00.053369918 +0000 UTC m=+1229.897448981" lastFinishedPulling="2026-01-22 09:31:03.946680531 +0000 UTC m=+1233.790759594" observedRunningTime="2026-01-22 09:31:04.885029044 +0000 UTC m=+1234.729108117" watchObservedRunningTime="2026-01-22 09:31:04.889837661 +0000 UTC m=+1234.733916724" Jan 22 09:31:06 crc kubenswrapper[4892]: I0122 09:31:06.886421 4892 generic.go:334] "Generic (PLEG): container finished" 
podID="180a0abc-388c-4a6a-bd24-91a416481a38" containerID="f21ff5de1f4cad085f5b1c2c542fff98cd191d78b3c83e1f235e31deb9a6be5b" exitCode=0 Jan 22 09:31:06 crc kubenswrapper[4892]: I0122 09:31:06.886499 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-86jc5" event={"ID":"180a0abc-388c-4a6a-bd24-91a416481a38","Type":"ContainerDied","Data":"f21ff5de1f4cad085f5b1c2c542fff98cd191d78b3c83e1f235e31deb9a6be5b"} Jan 22 09:31:08 crc kubenswrapper[4892]: I0122 09:31:08.128902 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 09:31:08 crc kubenswrapper[4892]: I0122 09:31:08.129155 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 09:31:08 crc kubenswrapper[4892]: I0122 09:31:08.243728 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-86jc5" Jan 22 09:31:08 crc kubenswrapper[4892]: I0122 09:31:08.378572 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btqrs\" (UniqueName: \"kubernetes.io/projected/180a0abc-388c-4a6a-bd24-91a416481a38-kube-api-access-btqrs\") pod \"180a0abc-388c-4a6a-bd24-91a416481a38\" (UID: \"180a0abc-388c-4a6a-bd24-91a416481a38\") " Jan 22 09:31:08 crc kubenswrapper[4892]: I0122 09:31:08.378706 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/180a0abc-388c-4a6a-bd24-91a416481a38-config-data\") pod \"180a0abc-388c-4a6a-bd24-91a416481a38\" (UID: \"180a0abc-388c-4a6a-bd24-91a416481a38\") " Jan 22 09:31:08 crc kubenswrapper[4892]: I0122 09:31:08.379571 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/180a0abc-388c-4a6a-bd24-91a416481a38-combined-ca-bundle\") pod \"180a0abc-388c-4a6a-bd24-91a416481a38\" (UID: \"180a0abc-388c-4a6a-bd24-91a416481a38\") " Jan 22 09:31:08 crc kubenswrapper[4892]: I0122 09:31:08.379652 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/180a0abc-388c-4a6a-bd24-91a416481a38-scripts\") pod \"180a0abc-388c-4a6a-bd24-91a416481a38\" (UID: \"180a0abc-388c-4a6a-bd24-91a416481a38\") " Jan 22 09:31:08 crc kubenswrapper[4892]: I0122 09:31:08.386043 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/180a0abc-388c-4a6a-bd24-91a416481a38-kube-api-access-btqrs" (OuterVolumeSpecName: "kube-api-access-btqrs") pod "180a0abc-388c-4a6a-bd24-91a416481a38" (UID: "180a0abc-388c-4a6a-bd24-91a416481a38"). InnerVolumeSpecName "kube-api-access-btqrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:31:08 crc kubenswrapper[4892]: I0122 09:31:08.400569 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/180a0abc-388c-4a6a-bd24-91a416481a38-scripts" (OuterVolumeSpecName: "scripts") pod "180a0abc-388c-4a6a-bd24-91a416481a38" (UID: "180a0abc-388c-4a6a-bd24-91a416481a38"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:31:08 crc kubenswrapper[4892]: I0122 09:31:08.405533 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/180a0abc-388c-4a6a-bd24-91a416481a38-config-data" (OuterVolumeSpecName: "config-data") pod "180a0abc-388c-4a6a-bd24-91a416481a38" (UID: "180a0abc-388c-4a6a-bd24-91a416481a38"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:31:08 crc kubenswrapper[4892]: I0122 09:31:08.436438 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/180a0abc-388c-4a6a-bd24-91a416481a38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "180a0abc-388c-4a6a-bd24-91a416481a38" (UID: "180a0abc-388c-4a6a-bd24-91a416481a38"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:31:08 crc kubenswrapper[4892]: I0122 09:31:08.481772 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/180a0abc-388c-4a6a-bd24-91a416481a38-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:31:08 crc kubenswrapper[4892]: I0122 09:31:08.481806 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/180a0abc-388c-4a6a-bd24-91a416481a38-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:31:08 crc kubenswrapper[4892]: I0122 09:31:08.481818 4892 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/180a0abc-388c-4a6a-bd24-91a416481a38-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:31:08 crc kubenswrapper[4892]: I0122 09:31:08.481826 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btqrs\" (UniqueName: \"kubernetes.io/projected/180a0abc-388c-4a6a-bd24-91a416481a38-kube-api-access-btqrs\") on node \"crc\" DevicePath \"\"" Jan 22 09:31:08 crc kubenswrapper[4892]: I0122 09:31:08.902671 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-86jc5" event={"ID":"180a0abc-388c-4a6a-bd24-91a416481a38","Type":"ContainerDied","Data":"ea2d443a3bb9bdaa8bbda4be1e2e2710fdec9b36e97041672617635565fc7282"} Jan 22 09:31:08 crc kubenswrapper[4892]: I0122 09:31:08.902707 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea2d443a3bb9bdaa8bbda4be1e2e2710fdec9b36e97041672617635565fc7282" Jan 22 09:31:08 crc kubenswrapper[4892]: I0122 09:31:08.902762 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-86jc5" Jan 22 09:31:09 crc kubenswrapper[4892]: I0122 09:31:09.116307 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 09:31:09 crc kubenswrapper[4892]: I0122 09:31:09.116793 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d8d0e640-322a-4479-8603-64deae4d364a" containerName="nova-scheduler-scheduler" containerID="cri-o://6a984c5261618ed96f27e8248ce2912dd232c3326e88d821824edae0d6a7cd82" gracePeriod=30 Jan 22 09:31:09 crc kubenswrapper[4892]: I0122 09:31:09.127874 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 22 09:31:09 crc kubenswrapper[4892]: I0122 09:31:09.132817 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 09:31:09 crc kubenswrapper[4892]: I0122 09:31:09.134230 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9a3ff585-adf7-489e-8987-74d52e3cbe73" containerName="nova-api-log" containerID="cri-o://6bd32c35b0a3073557125f764e8da7f57823bdb7d2d4c27ff16d2df3009901d0" gracePeriod=30 Jan 22 09:31:09 crc kubenswrapper[4892]: I0122 09:31:09.134280 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9a3ff585-adf7-489e-8987-74d52e3cbe73" containerName="nova-api-api" containerID="cri-o://794c20db8cf23553179dfbd167c65b91cb4c05318f36e5ebd1e4d5b136595a72" gracePeriod=30 Jan 22 09:31:09 crc kubenswrapper[4892]: I0122 09:31:09.139426 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9a3ff585-adf7-489e-8987-74d52e3cbe73" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 09:31:09 crc kubenswrapper[4892]: I0122 09:31:09.139449 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9a3ff585-adf7-489e-8987-74d52e3cbe73" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 09:31:09 crc kubenswrapper[4892]: I0122 09:31:09.150740 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 22 09:31:09 crc kubenswrapper[4892]: I0122 09:31:09.163036 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 22 09:31:09 crc kubenswrapper[4892]: I0122 09:31:09.200181 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 09:31:09 crc kubenswrapper[4892]: I0122 09:31:09.915952 4892 generic.go:334] "Generic (PLEG): container finished" podID="9a3ff585-adf7-489e-8987-74d52e3cbe73" containerID="6bd32c35b0a3073557125f764e8da7f57823bdb7d2d4c27ff16d2df3009901d0" exitCode=143 Jan 22 09:31:09 crc kubenswrapper[4892]: I0122 09:31:09.917377 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9a3ff585-adf7-489e-8987-74d52e3cbe73","Type":"ContainerDied","Data":"6bd32c35b0a3073557125f764e8da7f57823bdb7d2d4c27ff16d2df3009901d0"} Jan 22 09:31:10 crc kubenswrapper[4892]: I0122 09:31:10.102556 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 22 09:31:10 crc 
kubenswrapper[4892]: E0122 09:31:10.848055 4892 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a984c5261618ed96f27e8248ce2912dd232c3326e88d821824edae0d6a7cd82" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 22 09:31:10 crc kubenswrapper[4892]: E0122 09:31:10.849367 4892 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a984c5261618ed96f27e8248ce2912dd232c3326e88d821824edae0d6a7cd82" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 22 09:31:10 crc kubenswrapper[4892]: E0122 09:31:10.850361 4892 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a984c5261618ed96f27e8248ce2912dd232c3326e88d821824edae0d6a7cd82" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 22 09:31:10 crc kubenswrapper[4892]: E0122 09:31:10.850418 4892 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="d8d0e640-322a-4479-8603-64deae4d364a" containerName="nova-scheduler-scheduler" Jan 22 09:31:10 crc kubenswrapper[4892]: I0122 09:31:10.923994 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7b5fefad-3b0d-4d26-8fe3-2e117ea96708" containerName="nova-metadata-metadata" containerID="cri-o://d48f9cdc05bb7e9f689eafaf4d1e22abf9ea5bd1b80098dcb78db59d189dc297" gracePeriod=30 Jan 22 09:31:10 crc kubenswrapper[4892]: I0122 09:31:10.923959 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7b5fefad-3b0d-4d26-8fe3-2e117ea96708" containerName="nova-metadata-log" containerID="cri-o://81dda12ab59217dfcd20b7be8c27394200cfdaeacc9fb1d276221943ca510764" gracePeriod=30 Jan 22 09:31:11 crc kubenswrapper[4892]: I0122 09:31:11.936639 4892 generic.go:334] "Generic (PLEG): container finished" podID="7b5fefad-3b0d-4d26-8fe3-2e117ea96708" containerID="81dda12ab59217dfcd20b7be8c27394200cfdaeacc9fb1d276221943ca510764" exitCode=143 Jan 22 09:31:11 crc kubenswrapper[4892]: I0122 09:31:11.936718 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7b5fefad-3b0d-4d26-8fe3-2e117ea96708","Type":"ContainerDied","Data":"81dda12ab59217dfcd20b7be8c27394200cfdaeacc9fb1d276221943ca510764"} Jan 22 09:31:14 crc kubenswrapper[4892]: I0122 09:31:14.101934 4892 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="7b5fefad-3b0d-4d26-8fe3-2e117ea96708" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.200:8775/\": dial tcp 10.217.0.200:8775: connect: connection refused" Jan 22 09:31:14 crc kubenswrapper[4892]: I0122 09:31:14.101979 4892 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="7b5fefad-3b0d-4d26-8fe3-2e117ea96708" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.200:8775/\": dial tcp 10.217.0.200:8775: connect: connection refused" Jan 22 09:31:14 crc 
kubenswrapper[4892]: I0122 09:31:14.788610 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 09:31:14 crc kubenswrapper[4892]: I0122 09:31:14.893489 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 09:31:14 crc kubenswrapper[4892]: I0122 09:31:14.910316 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b5fefad-3b0d-4d26-8fe3-2e117ea96708-logs\") pod \"7b5fefad-3b0d-4d26-8fe3-2e117ea96708\" (UID: \"7b5fefad-3b0d-4d26-8fe3-2e117ea96708\") " Jan 22 09:31:14 crc kubenswrapper[4892]: I0122 09:31:14.910450 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtp94\" (UniqueName: \"kubernetes.io/projected/7b5fefad-3b0d-4d26-8fe3-2e117ea96708-kube-api-access-rtp94\") pod \"7b5fefad-3b0d-4d26-8fe3-2e117ea96708\" (UID: \"7b5fefad-3b0d-4d26-8fe3-2e117ea96708\") " Jan 22 09:31:14 crc kubenswrapper[4892]: I0122 09:31:14.910492 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b5fefad-3b0d-4d26-8fe3-2e117ea96708-combined-ca-bundle\") pod \"7b5fefad-3b0d-4d26-8fe3-2e117ea96708\" (UID: \"7b5fefad-3b0d-4d26-8fe3-2e117ea96708\") " Jan 22 09:31:14 crc kubenswrapper[4892]: I0122 09:31:14.910555 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b5fefad-3b0d-4d26-8fe3-2e117ea96708-nova-metadata-tls-certs\") pod \"7b5fefad-3b0d-4d26-8fe3-2e117ea96708\" (UID: \"7b5fefad-3b0d-4d26-8fe3-2e117ea96708\") " Jan 22 09:31:14 crc kubenswrapper[4892]: I0122 09:31:14.910582 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b5fefad-3b0d-4d26-8fe3-2e117ea96708-config-data\") pod \"7b5fefad-3b0d-4d26-8fe3-2e117ea96708\" (UID: \"7b5fefad-3b0d-4d26-8fe3-2e117ea96708\") " Jan 22 09:31:14 crc kubenswrapper[4892]: I0122 09:31:14.911149 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b5fefad-3b0d-4d26-8fe3-2e117ea96708-logs" (OuterVolumeSpecName: "logs") pod "7b5fefad-3b0d-4d26-8fe3-2e117ea96708" (UID: "7b5fefad-3b0d-4d26-8fe3-2e117ea96708"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:31:14 crc kubenswrapper[4892]: I0122 09:31:14.935232 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b5fefad-3b0d-4d26-8fe3-2e117ea96708-kube-api-access-rtp94" (OuterVolumeSpecName: "kube-api-access-rtp94") pod "7b5fefad-3b0d-4d26-8fe3-2e117ea96708" (UID: "7b5fefad-3b0d-4d26-8fe3-2e117ea96708"). InnerVolumeSpecName "kube-api-access-rtp94". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:31:14 crc kubenswrapper[4892]: I0122 09:31:14.951850 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b5fefad-3b0d-4d26-8fe3-2e117ea96708-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b5fefad-3b0d-4d26-8fe3-2e117ea96708" (UID: "7b5fefad-3b0d-4d26-8fe3-2e117ea96708"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:31:14 crc kubenswrapper[4892]: I0122 09:31:14.956771 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b5fefad-3b0d-4d26-8fe3-2e117ea96708-config-data" (OuterVolumeSpecName: "config-data") pod "7b5fefad-3b0d-4d26-8fe3-2e117ea96708" (UID: "7b5fefad-3b0d-4d26-8fe3-2e117ea96708"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:31:14 crc kubenswrapper[4892]: I0122 09:31:14.980040 4892 generic.go:334] "Generic (PLEG): container finished" podID="d8d0e640-322a-4479-8603-64deae4d364a" containerID="6a984c5261618ed96f27e8248ce2912dd232c3326e88d821824edae0d6a7cd82" exitCode=0 Jan 22 09:31:14 crc kubenswrapper[4892]: I0122 09:31:14.980112 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d8d0e640-322a-4479-8603-64deae4d364a","Type":"ContainerDied","Data":"6a984c5261618ed96f27e8248ce2912dd232c3326e88d821824edae0d6a7cd82"} Jan 22 09:31:14 crc kubenswrapper[4892]: I0122 09:31:14.980143 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d8d0e640-322a-4479-8603-64deae4d364a","Type":"ContainerDied","Data":"4a3b370ae5485749f3b3e15bd0a78127cad586ed31135dcad5baafdc926797f6"} Jan 22 09:31:14 crc kubenswrapper[4892]: I0122 09:31:14.980161 4892 scope.go:117] "RemoveContainer" containerID="6a984c5261618ed96f27e8248ce2912dd232c3326e88d821824edae0d6a7cd82" Jan 22 09:31:14 crc kubenswrapper[4892]: I0122 09:31:14.980265 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 09:31:14 crc kubenswrapper[4892]: I0122 09:31:14.990680 4892 generic.go:334] "Generic (PLEG): container finished" podID="9a3ff585-adf7-489e-8987-74d52e3cbe73" containerID="794c20db8cf23553179dfbd167c65b91cb4c05318f36e5ebd1e4d5b136595a72" exitCode=0 Jan 22 09:31:14 crc kubenswrapper[4892]: I0122 09:31:14.990751 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9a3ff585-adf7-489e-8987-74d52e3cbe73","Type":"ContainerDied","Data":"794c20db8cf23553179dfbd167c65b91cb4c05318f36e5ebd1e4d5b136595a72"} Jan 22 09:31:14 crc kubenswrapper[4892]: I0122 09:31:14.994541 4892 generic.go:334] "Generic (PLEG): container finished" podID="7b5fefad-3b0d-4d26-8fe3-2e117ea96708" containerID="d48f9cdc05bb7e9f689eafaf4d1e22abf9ea5bd1b80098dcb78db59d189dc297" exitCode=0 Jan 22 09:31:14 crc kubenswrapper[4892]: I0122 09:31:14.994579 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7b5fefad-3b0d-4d26-8fe3-2e117ea96708","Type":"ContainerDied","Data":"d48f9cdc05bb7e9f689eafaf4d1e22abf9ea5bd1b80098dcb78db59d189dc297"} Jan 22 09:31:14 crc kubenswrapper[4892]: I0122 09:31:14.994605 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7b5fefad-3b0d-4d26-8fe3-2e117ea96708","Type":"ContainerDied","Data":"291d3802b998483f55a9c2b8676cfba8ef526fcb77340977e6d18ff24912bcc9"} Jan 22 09:31:14 crc kubenswrapper[4892]: I0122 09:31:14.994669 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.008778 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b5fefad-3b0d-4d26-8fe3-2e117ea96708-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "7b5fefad-3b0d-4d26-8fe3-2e117ea96708" (UID: "7b5fefad-3b0d-4d26-8fe3-2e117ea96708"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.010163 4892 scope.go:117] "RemoveContainer" containerID="6a984c5261618ed96f27e8248ce2912dd232c3326e88d821824edae0d6a7cd82" Jan 22 09:31:15 crc kubenswrapper[4892]: E0122 09:31:15.010598 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a984c5261618ed96f27e8248ce2912dd232c3326e88d821824edae0d6a7cd82\": container with ID starting with 6a984c5261618ed96f27e8248ce2912dd232c3326e88d821824edae0d6a7cd82 not found: ID does not exist" containerID="6a984c5261618ed96f27e8248ce2912dd232c3326e88d821824edae0d6a7cd82" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.010648 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a984c5261618ed96f27e8248ce2912dd232c3326e88d821824edae0d6a7cd82"} err="failed to get container status \"6a984c5261618ed96f27e8248ce2912dd232c3326e88d821824edae0d6a7cd82\": rpc error: code = NotFound desc = could not find container \"6a984c5261618ed96f27e8248ce2912dd232c3326e88d821824edae0d6a7cd82\": container with ID starting with 6a984c5261618ed96f27e8248ce2912dd232c3326e88d821824edae0d6a7cd82 not found: ID does not exist" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.010697 4892 scope.go:117] "RemoveContainer" containerID="d48f9cdc05bb7e9f689eafaf4d1e22abf9ea5bd1b80098dcb78db59d189dc297" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.011621 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d0e640-322a-4479-8603-64deae4d364a-combined-ca-bundle\") pod \"d8d0e640-322a-4479-8603-64deae4d364a\" (UID: \"d8d0e640-322a-4479-8603-64deae4d364a\") " Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.011665 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d0e640-322a-4479-8603-64deae4d364a-config-data\") pod \"d8d0e640-322a-4479-8603-64deae4d364a\" (UID: \"d8d0e640-322a-4479-8603-64deae4d364a\") " Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.011775 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztc2c\" (UniqueName: \"kubernetes.io/projected/d8d0e640-322a-4479-8603-64deae4d364a-kube-api-access-ztc2c\") pod \"d8d0e640-322a-4479-8603-64deae4d364a\" (UID: \"d8d0e640-322a-4479-8603-64deae4d364a\") " Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.012531 4892 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b5fefad-3b0d-4d26-8fe3-2e117ea96708-logs\") on node \"crc\" DevicePath \"\"" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.012587 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtp94\" (UniqueName: \"kubernetes.io/projected/7b5fefad-3b0d-4d26-8fe3-2e117ea96708-kube-api-access-rtp94\") on node \"crc\" DevicePath \"\"" Jan 22 09:31:15 
crc kubenswrapper[4892]: I0122 09:31:15.012600 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b5fefad-3b0d-4d26-8fe3-2e117ea96708-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.012609 4892 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b5fefad-3b0d-4d26-8fe3-2e117ea96708-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.012637 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b5fefad-3b0d-4d26-8fe3-2e117ea96708-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.016592 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8d0e640-322a-4479-8603-64deae4d364a-kube-api-access-ztc2c" (OuterVolumeSpecName: "kube-api-access-ztc2c") pod "d8d0e640-322a-4479-8603-64deae4d364a" (UID: "d8d0e640-322a-4479-8603-64deae4d364a"). InnerVolumeSpecName "kube-api-access-ztc2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.028752 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.034215 4892 scope.go:117] "RemoveContainer" containerID="81dda12ab59217dfcd20b7be8c27394200cfdaeacc9fb1d276221943ca510764" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.056678 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d0e640-322a-4479-8603-64deae4d364a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8d0e640-322a-4479-8603-64deae4d364a" (UID: "d8d0e640-322a-4479-8603-64deae4d364a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.058785 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d0e640-322a-4479-8603-64deae4d364a-config-data" (OuterVolumeSpecName: "config-data") pod "d8d0e640-322a-4479-8603-64deae4d364a" (UID: "d8d0e640-322a-4479-8603-64deae4d364a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.065967 4892 scope.go:117] "RemoveContainer" containerID="d48f9cdc05bb7e9f689eafaf4d1e22abf9ea5bd1b80098dcb78db59d189dc297" Jan 22 09:31:15 crc kubenswrapper[4892]: E0122 09:31:15.066411 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d48f9cdc05bb7e9f689eafaf4d1e22abf9ea5bd1b80098dcb78db59d189dc297\": container with ID starting with d48f9cdc05bb7e9f689eafaf4d1e22abf9ea5bd1b80098dcb78db59d189dc297 not found: ID does not exist" containerID="d48f9cdc05bb7e9f689eafaf4d1e22abf9ea5bd1b80098dcb78db59d189dc297" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.066459 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d48f9cdc05bb7e9f689eafaf4d1e22abf9ea5bd1b80098dcb78db59d189dc297"} err="failed to get container status \"d48f9cdc05bb7e9f689eafaf4d1e22abf9ea5bd1b80098dcb78db59d189dc297\": rpc error: code = NotFound desc = could not find container \"d48f9cdc05bb7e9f689eafaf4d1e22abf9ea5bd1b80098dcb78db59d189dc297\": container with ID starting with d48f9cdc05bb7e9f689eafaf4d1e22abf9ea5bd1b80098dcb78db59d189dc297 not found: ID does not exist" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.066486 4892 scope.go:117] "RemoveContainer" containerID="81dda12ab59217dfcd20b7be8c27394200cfdaeacc9fb1d276221943ca510764" Jan 22 09:31:15 crc kubenswrapper[4892]: E0122 09:31:15.066766 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81dda12ab59217dfcd20b7be8c27394200cfdaeacc9fb1d276221943ca510764\": container with ID starting with 81dda12ab59217dfcd20b7be8c27394200cfdaeacc9fb1d276221943ca510764 not found: ID does not exist" containerID="81dda12ab59217dfcd20b7be8c27394200cfdaeacc9fb1d276221943ca510764" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.066814 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81dda12ab59217dfcd20b7be8c27394200cfdaeacc9fb1d276221943ca510764"} err="failed to get container status \"81dda12ab59217dfcd20b7be8c27394200cfdaeacc9fb1d276221943ca510764\": rpc error: code = NotFound desc = could not find container \"81dda12ab59217dfcd20b7be8c27394200cfdaeacc9fb1d276221943ca510764\": container with ID starting with 81dda12ab59217dfcd20b7be8c27394200cfdaeacc9fb1d276221943ca510764 not found: ID does not exist" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.114067 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a3ff585-adf7-489e-8987-74d52e3cbe73-public-tls-certs\") pod \"9a3ff585-adf7-489e-8987-74d52e3cbe73\" (UID: \"9a3ff585-adf7-489e-8987-74d52e3cbe73\") " Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.114154 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a3ff585-adf7-489e-8987-74d52e3cbe73-config-data\") pod \"9a3ff585-adf7-489e-8987-74d52e3cbe73\" (UID: \"9a3ff585-adf7-489e-8987-74d52e3cbe73\") " Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.114243 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a3ff585-adf7-489e-8987-74d52e3cbe73-internal-tls-certs\") pod \"9a3ff585-adf7-489e-8987-74d52e3cbe73\" (UID: 
\"9a3ff585-adf7-489e-8987-74d52e3cbe73\") " Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.114275 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a3ff585-adf7-489e-8987-74d52e3cbe73-logs\") pod \"9a3ff585-adf7-489e-8987-74d52e3cbe73\" (UID: \"9a3ff585-adf7-489e-8987-74d52e3cbe73\") " Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.114336 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a3ff585-adf7-489e-8987-74d52e3cbe73-combined-ca-bundle\") pod \"9a3ff585-adf7-489e-8987-74d52e3cbe73\" (UID: \"9a3ff585-adf7-489e-8987-74d52e3cbe73\") " Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.114361 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-822ws\" (UniqueName: \"kubernetes.io/projected/9a3ff585-adf7-489e-8987-74d52e3cbe73-kube-api-access-822ws\") pod \"9a3ff585-adf7-489e-8987-74d52e3cbe73\" (UID: \"9a3ff585-adf7-489e-8987-74d52e3cbe73\") " Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.114967 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d0e640-322a-4479-8603-64deae4d364a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.114986 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d0e640-322a-4479-8603-64deae4d364a-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.114997 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztc2c\" (UniqueName: \"kubernetes.io/projected/d8d0e640-322a-4479-8603-64deae4d364a-kube-api-access-ztc2c\") on node \"crc\" DevicePath \"\"" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.115731 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a3ff585-adf7-489e-8987-74d52e3cbe73-logs" (OuterVolumeSpecName: "logs") pod "9a3ff585-adf7-489e-8987-74d52e3cbe73" (UID: "9a3ff585-adf7-489e-8987-74d52e3cbe73"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.129737 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a3ff585-adf7-489e-8987-74d52e3cbe73-kube-api-access-822ws" (OuterVolumeSpecName: "kube-api-access-822ws") pod "9a3ff585-adf7-489e-8987-74d52e3cbe73" (UID: "9a3ff585-adf7-489e-8987-74d52e3cbe73"). InnerVolumeSpecName "kube-api-access-822ws". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.163627 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a3ff585-adf7-489e-8987-74d52e3cbe73-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a3ff585-adf7-489e-8987-74d52e3cbe73" (UID: "9a3ff585-adf7-489e-8987-74d52e3cbe73"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.167266 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a3ff585-adf7-489e-8987-74d52e3cbe73-config-data" (OuterVolumeSpecName: "config-data") pod "9a3ff585-adf7-489e-8987-74d52e3cbe73" (UID: "9a3ff585-adf7-489e-8987-74d52e3cbe73"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.177896 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a3ff585-adf7-489e-8987-74d52e3cbe73-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9a3ff585-adf7-489e-8987-74d52e3cbe73" (UID: "9a3ff585-adf7-489e-8987-74d52e3cbe73"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.192478 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a3ff585-adf7-489e-8987-74d52e3cbe73-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9a3ff585-adf7-489e-8987-74d52e3cbe73" (UID: "9a3ff585-adf7-489e-8987-74d52e3cbe73"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.216868 4892 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a3ff585-adf7-489e-8987-74d52e3cbe73-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.216919 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a3ff585-adf7-489e-8987-74d52e3cbe73-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.216929 4892 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a3ff585-adf7-489e-8987-74d52e3cbe73-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.216940 4892 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a3ff585-adf7-489e-8987-74d52e3cbe73-logs\") on node \"crc\" DevicePath \"\"" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.216968 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a3ff585-adf7-489e-8987-74d52e3cbe73-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.216977 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-822ws\" (UniqueName: \"kubernetes.io/projected/9a3ff585-adf7-489e-8987-74d52e3cbe73-kube-api-access-822ws\") on node \"crc\" DevicePath \"\"" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.317825 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.333489 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.355328 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.367020 4892 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-scheduler-0"] Jan 22 09:31:15 crc kubenswrapper[4892]: E0122 09:31:15.367562 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a3ff585-adf7-489e-8987-74d52e3cbe73" containerName="nova-api-api" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.367586 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a3ff585-adf7-489e-8987-74d52e3cbe73" containerName="nova-api-api" Jan 22 09:31:15 crc kubenswrapper[4892]: E0122 09:31:15.367604 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8d0e640-322a-4479-8603-64deae4d364a" containerName="nova-scheduler-scheduler" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.367615 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8d0e640-322a-4479-8603-64deae4d364a" containerName="nova-scheduler-scheduler" Jan 22 09:31:15 crc kubenswrapper[4892]: E0122 09:31:15.367641 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a3ff585-adf7-489e-8987-74d52e3cbe73" containerName="nova-api-log" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.367650 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a3ff585-adf7-489e-8987-74d52e3cbe73" containerName="nova-api-log" Jan 22 09:31:15 crc kubenswrapper[4892]: E0122 09:31:15.367662 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="180a0abc-388c-4a6a-bd24-91a416481a38" containerName="nova-manage" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.367670 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="180a0abc-388c-4a6a-bd24-91a416481a38" containerName="nova-manage" Jan 22 09:31:15 crc kubenswrapper[4892]: E0122 09:31:15.367683 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b5fefad-3b0d-4d26-8fe3-2e117ea96708" containerName="nova-metadata-log" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.367691 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b5fefad-3b0d-4d26-8fe3-2e117ea96708" containerName="nova-metadata-log" Jan 22 09:31:15 crc kubenswrapper[4892]: E0122 09:31:15.367708 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="310458cb-5d40-4525-a26e-0df3583401c7" containerName="init" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.367716 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="310458cb-5d40-4525-a26e-0df3583401c7" containerName="init" Jan 22 09:31:15 crc kubenswrapper[4892]: E0122 09:31:15.367746 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="310458cb-5d40-4525-a26e-0df3583401c7" containerName="dnsmasq-dns" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.367755 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="310458cb-5d40-4525-a26e-0df3583401c7" containerName="dnsmasq-dns" Jan 22 09:31:15 crc kubenswrapper[4892]: E0122 09:31:15.367765 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b5fefad-3b0d-4d26-8fe3-2e117ea96708" containerName="nova-metadata-metadata" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.367773 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b5fefad-3b0d-4d26-8fe3-2e117ea96708" containerName="nova-metadata-metadata" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.367987 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="180a0abc-388c-4a6a-bd24-91a416481a38" containerName="nova-manage" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.368008 4892 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="310458cb-5d40-4525-a26e-0df3583401c7" containerName="dnsmasq-dns" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.368020 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a3ff585-adf7-489e-8987-74d52e3cbe73" containerName="nova-api-api" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.368033 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b5fefad-3b0d-4d26-8fe3-2e117ea96708" containerName="nova-metadata-metadata" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.368051 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b5fefad-3b0d-4d26-8fe3-2e117ea96708" containerName="nova-metadata-log" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.368070 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8d0e640-322a-4479-8603-64deae4d364a" containerName="nova-scheduler-scheduler" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.368088 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a3ff585-adf7-489e-8987-74d52e3cbe73" containerName="nova-api-log" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.368857 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.372508 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.384972 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.394951 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.405455 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.407038 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.409595 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.409987 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.415588 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.432658 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b5fefad-3b0d-4d26-8fe3-2e117ea96708" path="/var/lib/kubelet/pods/7b5fefad-3b0d-4d26-8fe3-2e117ea96708/volumes" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.433410 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8d0e640-322a-4479-8603-64deae4d364a" path="/var/lib/kubelet/pods/d8d0e640-322a-4479-8603-64deae4d364a/volumes" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.522008 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blv2w\" (UniqueName: \"kubernetes.io/projected/51bca3ef-0b5c-4c51-bf42-95ad11eba3be-kube-api-access-blv2w\") pod \"nova-metadata-0\" (UID: \"51bca3ef-0b5c-4c51-bf42-95ad11eba3be\") " pod="openstack/nova-metadata-0" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.522053 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/51bca3ef-0b5c-4c51-bf42-95ad11eba3be-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"51bca3ef-0b5c-4c51-bf42-95ad11eba3be\") " pod="openstack/nova-metadata-0" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.522122 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51bca3ef-0b5c-4c51-bf42-95ad11eba3be-logs\") pod \"nova-metadata-0\" (UID: \"51bca3ef-0b5c-4c51-bf42-95ad11eba3be\") " pod="openstack/nova-metadata-0" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.522201 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51bca3ef-0b5c-4c51-bf42-95ad11eba3be-config-data\") pod \"nova-metadata-0\" (UID: \"51bca3ef-0b5c-4c51-bf42-95ad11eba3be\") " pod="openstack/nova-metadata-0" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.522248 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27216438-d79d-4606-8ac6-6636fc9b6e06-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"27216438-d79d-4606-8ac6-6636fc9b6e06\") " pod="openstack/nova-scheduler-0" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.522294 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27216438-d79d-4606-8ac6-6636fc9b6e06-config-data\") pod \"nova-scheduler-0\" (UID: \"27216438-d79d-4606-8ac6-6636fc9b6e06\") " pod="openstack/nova-scheduler-0" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.522350 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/51bca3ef-0b5c-4c51-bf42-95ad11eba3be-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"51bca3ef-0b5c-4c51-bf42-95ad11eba3be\") " pod="openstack/nova-metadata-0" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.522414 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87qhk\" (UniqueName: \"kubernetes.io/projected/27216438-d79d-4606-8ac6-6636fc9b6e06-kube-api-access-87qhk\") pod \"nova-scheduler-0\" (UID: \"27216438-d79d-4606-8ac6-6636fc9b6e06\") " pod="openstack/nova-scheduler-0" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.623868 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51bca3ef-0b5c-4c51-bf42-95ad11eba3be-logs\") pod \"nova-metadata-0\" (UID: \"51bca3ef-0b5c-4c51-bf42-95ad11eba3be\") " pod="openstack/nova-metadata-0" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.623930 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51bca3ef-0b5c-4c51-bf42-95ad11eba3be-config-data\") pod \"nova-metadata-0\" (UID: \"51bca3ef-0b5c-4c51-bf42-95ad11eba3be\") " pod="openstack/nova-metadata-0" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.623970 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27216438-d79d-4606-8ac6-6636fc9b6e06-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"27216438-d79d-4606-8ac6-6636fc9b6e06\") " pod="openstack/nova-scheduler-0" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.623999 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27216438-d79d-4606-8ac6-6636fc9b6e06-config-data\") pod \"nova-scheduler-0\" (UID: \"27216438-d79d-4606-8ac6-6636fc9b6e06\") " pod="openstack/nova-scheduler-0" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.624076 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51bca3ef-0b5c-4c51-bf42-95ad11eba3be-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"51bca3ef-0b5c-4c51-bf42-95ad11eba3be\") " pod="openstack/nova-metadata-0" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.624156 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87qhk\" (UniqueName: \"kubernetes.io/projected/27216438-d79d-4606-8ac6-6636fc9b6e06-kube-api-access-87qhk\") pod \"nova-scheduler-0\" (UID: \"27216438-d79d-4606-8ac6-6636fc9b6e06\") " pod="openstack/nova-scheduler-0" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.624256 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51bca3ef-0b5c-4c51-bf42-95ad11eba3be-logs\") pod \"nova-metadata-0\" (UID: \"51bca3ef-0b5c-4c51-bf42-95ad11eba3be\") " pod="openstack/nova-metadata-0" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.624275 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blv2w\" (UniqueName: \"kubernetes.io/projected/51bca3ef-0b5c-4c51-bf42-95ad11eba3be-kube-api-access-blv2w\") pod \"nova-metadata-0\" (UID: \"51bca3ef-0b5c-4c51-bf42-95ad11eba3be\") " pod="openstack/nova-metadata-0" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.624343 4892 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/51bca3ef-0b5c-4c51-bf42-95ad11eba3be-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"51bca3ef-0b5c-4c51-bf42-95ad11eba3be\") " pod="openstack/nova-metadata-0" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.627719 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51bca3ef-0b5c-4c51-bf42-95ad11eba3be-config-data\") pod \"nova-metadata-0\" (UID: \"51bca3ef-0b5c-4c51-bf42-95ad11eba3be\") " pod="openstack/nova-metadata-0" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.632874 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27216438-d79d-4606-8ac6-6636fc9b6e06-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"27216438-d79d-4606-8ac6-6636fc9b6e06\") " pod="openstack/nova-scheduler-0" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.633042 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27216438-d79d-4606-8ac6-6636fc9b6e06-config-data\") pod \"nova-scheduler-0\" (UID: \"27216438-d79d-4606-8ac6-6636fc9b6e06\") " pod="openstack/nova-scheduler-0" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.637643 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/51bca3ef-0b5c-4c51-bf42-95ad11eba3be-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"51bca3ef-0b5c-4c51-bf42-95ad11eba3be\") " pod="openstack/nova-metadata-0" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.639164 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51bca3ef-0b5c-4c51-bf42-95ad11eba3be-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"51bca3ef-0b5c-4c51-bf42-95ad11eba3be\") " pod="openstack/nova-metadata-0" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.640166 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87qhk\" (UniqueName: \"kubernetes.io/projected/27216438-d79d-4606-8ac6-6636fc9b6e06-kube-api-access-87qhk\") pod \"nova-scheduler-0\" (UID: \"27216438-d79d-4606-8ac6-6636fc9b6e06\") " pod="openstack/nova-scheduler-0" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.641016 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blv2w\" (UniqueName: \"kubernetes.io/projected/51bca3ef-0b5c-4c51-bf42-95ad11eba3be-kube-api-access-blv2w\") pod \"nova-metadata-0\" (UID: \"51bca3ef-0b5c-4c51-bf42-95ad11eba3be\") " pod="openstack/nova-metadata-0" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.693881 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 09:31:15 crc kubenswrapper[4892]: I0122 09:31:15.729501 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.014330 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.014323 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9a3ff585-adf7-489e-8987-74d52e3cbe73","Type":"ContainerDied","Data":"3a6ae3f28d965014c61dd766cec3a053a866e92d4893255b5b3173dd30dabc8f"} Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.014709 4892 scope.go:117] "RemoveContainer" containerID="794c20db8cf23553179dfbd167c65b91cb4c05318f36e5ebd1e4d5b136595a72" Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.046311 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.053570 4892 scope.go:117] "RemoveContainer" containerID="6bd32c35b0a3073557125f764e8da7f57823bdb7d2d4c27ff16d2df3009901d0" Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.060620 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.071440 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.073301 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.101331 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.101651 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.101815 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.108571 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.161796 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.241129 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb0b7944-3391-4c47-91a6-47c3aa62442a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fb0b7944-3391-4c47-91a6-47c3aa62442a\") " pod="openstack/nova-api-0" Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.241201 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb0b7944-3391-4c47-91a6-47c3aa62442a-logs\") pod \"nova-api-0\" (UID: \"fb0b7944-3391-4c47-91a6-47c3aa62442a\") " pod="openstack/nova-api-0" Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.241224 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb0b7944-3391-4c47-91a6-47c3aa62442a-public-tls-certs\") pod \"nova-api-0\" (UID: \"fb0b7944-3391-4c47-91a6-47c3aa62442a\") " pod="openstack/nova-api-0" Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.241265 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8ndb\" (UniqueName: \"kubernetes.io/projected/fb0b7944-3391-4c47-91a6-47c3aa62442a-kube-api-access-b8ndb\") pod \"nova-api-0\" (UID: 
\"fb0b7944-3391-4c47-91a6-47c3aa62442a\") " pod="openstack/nova-api-0" Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.241304 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb0b7944-3391-4c47-91a6-47c3aa62442a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fb0b7944-3391-4c47-91a6-47c3aa62442a\") " pod="openstack/nova-api-0" Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.241367 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb0b7944-3391-4c47-91a6-47c3aa62442a-config-data\") pod \"nova-api-0\" (UID: \"fb0b7944-3391-4c47-91a6-47c3aa62442a\") " pod="openstack/nova-api-0" Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.287544 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 09:31:16 crc kubenswrapper[4892]: W0122 09:31:16.298396 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51bca3ef_0b5c_4c51_bf42_95ad11eba3be.slice/crio-c5b3a2c5e31fa915cb7abf2fc0684529fe12d063ca816c51fe90d495db2611f0 WatchSource:0}: Error finding container c5b3a2c5e31fa915cb7abf2fc0684529fe12d063ca816c51fe90d495db2611f0: Status 404 returned error can't find the container with id c5b3a2c5e31fa915cb7abf2fc0684529fe12d063ca816c51fe90d495db2611f0 Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.323444 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.323501 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.323547 4892 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.324235 4892 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"31a997f31663709d14ae5efb219a31b8ac9b066d6e93055a348ee5203f0f3774"} pod="openshift-machine-config-operator/machine-config-daemon-w87tf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.324320 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" containerID="cri-o://31a997f31663709d14ae5efb219a31b8ac9b066d6e93055a348ee5203f0f3774" gracePeriod=600 Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.342834 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb0b7944-3391-4c47-91a6-47c3aa62442a-config-data\") pod \"nova-api-0\" 
(UID: \"fb0b7944-3391-4c47-91a6-47c3aa62442a\") " pod="openstack/nova-api-0" Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.342960 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb0b7944-3391-4c47-91a6-47c3aa62442a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fb0b7944-3391-4c47-91a6-47c3aa62442a\") " pod="openstack/nova-api-0" Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.343021 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb0b7944-3391-4c47-91a6-47c3aa62442a-logs\") pod \"nova-api-0\" (UID: \"fb0b7944-3391-4c47-91a6-47c3aa62442a\") " pod="openstack/nova-api-0" Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.343051 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb0b7944-3391-4c47-91a6-47c3aa62442a-public-tls-certs\") pod \"nova-api-0\" (UID: \"fb0b7944-3391-4c47-91a6-47c3aa62442a\") " pod="openstack/nova-api-0" Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.343090 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8ndb\" (UniqueName: \"kubernetes.io/projected/fb0b7944-3391-4c47-91a6-47c3aa62442a-kube-api-access-b8ndb\") pod \"nova-api-0\" (UID: \"fb0b7944-3391-4c47-91a6-47c3aa62442a\") " pod="openstack/nova-api-0" Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.343125 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb0b7944-3391-4c47-91a6-47c3aa62442a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fb0b7944-3391-4c47-91a6-47c3aa62442a\") " pod="openstack/nova-api-0" Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.343469 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb0b7944-3391-4c47-91a6-47c3aa62442a-logs\") pod \"nova-api-0\" (UID: \"fb0b7944-3391-4c47-91a6-47c3aa62442a\") " pod="openstack/nova-api-0" Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.346476 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb0b7944-3391-4c47-91a6-47c3aa62442a-public-tls-certs\") pod \"nova-api-0\" (UID: \"fb0b7944-3391-4c47-91a6-47c3aa62442a\") " pod="openstack/nova-api-0" Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.346672 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb0b7944-3391-4c47-91a6-47c3aa62442a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fb0b7944-3391-4c47-91a6-47c3aa62442a\") " pod="openstack/nova-api-0" Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.347045 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb0b7944-3391-4c47-91a6-47c3aa62442a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fb0b7944-3391-4c47-91a6-47c3aa62442a\") " pod="openstack/nova-api-0" Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.348353 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb0b7944-3391-4c47-91a6-47c3aa62442a-config-data\") pod \"nova-api-0\" (UID: \"fb0b7944-3391-4c47-91a6-47c3aa62442a\") " pod="openstack/nova-api-0" 
Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.360957 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8ndb\" (UniqueName: \"kubernetes.io/projected/fb0b7944-3391-4c47-91a6-47c3aa62442a-kube-api-access-b8ndb\") pod \"nova-api-0\" (UID: \"fb0b7944-3391-4c47-91a6-47c3aa62442a\") " pod="openstack/nova-api-0"
Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.402353 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 22 09:31:16 crc kubenswrapper[4892]: I0122 09:31:16.859256 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 22 09:31:16 crc kubenswrapper[4892]: W0122 09:31:16.866665 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb0b7944_3391_4c47_91a6_47c3aa62442a.slice/crio-7f934734ca0a890981cefe062f799aa5eb883cbabd66e1e1edf46c821ee9c683 WatchSource:0}: Error finding container 7f934734ca0a890981cefe062f799aa5eb883cbabd66e1e1edf46c821ee9c683: Status 404 returned error can't find the container with id 7f934734ca0a890981cefe062f799aa5eb883cbabd66e1e1edf46c821ee9c683
Jan 22 09:31:17 crc kubenswrapper[4892]: I0122 09:31:17.027851 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fb0b7944-3391-4c47-91a6-47c3aa62442a","Type":"ContainerStarted","Data":"89c332bf73fd764ecd4d9801bc70f813cdb663b0f5cc91825db402c1ed16500d"}
Jan 22 09:31:17 crc kubenswrapper[4892]: I0122 09:31:17.027894 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fb0b7944-3391-4c47-91a6-47c3aa62442a","Type":"ContainerStarted","Data":"7f934734ca0a890981cefe062f799aa5eb883cbabd66e1e1edf46c821ee9c683"}
Jan 22 09:31:17 crc kubenswrapper[4892]: I0122 09:31:17.030026 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51bca3ef-0b5c-4c51-bf42-95ad11eba3be","Type":"ContainerStarted","Data":"727963c672e2f475ac6524621367524550070d91a087e290c3e94e5b83ece6c3"}
Jan 22 09:31:17 crc kubenswrapper[4892]: I0122 09:31:17.030084 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51bca3ef-0b5c-4c51-bf42-95ad11eba3be","Type":"ContainerStarted","Data":"f4514a27a35d9e6e7a7dc985a97b8d3bd037bc442fe4f49654c52edd35e07d06"}
Jan 22 09:31:17 crc kubenswrapper[4892]: I0122 09:31:17.030105 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51bca3ef-0b5c-4c51-bf42-95ad11eba3be","Type":"ContainerStarted","Data":"c5b3a2c5e31fa915cb7abf2fc0684529fe12d063ca816c51fe90d495db2611f0"}
Jan 22 09:31:17 crc kubenswrapper[4892]: I0122 09:31:17.031407 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"27216438-d79d-4606-8ac6-6636fc9b6e06","Type":"ContainerStarted","Data":"08dc3116987d98bb68c1fcc03310e43cdc07586a84e192d1a1beaf3c6610371f"}
Jan 22 09:31:17 crc kubenswrapper[4892]: I0122 09:31:17.031453 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"27216438-d79d-4606-8ac6-6636fc9b6e06","Type":"ContainerStarted","Data":"18293b280512440ad7a7b3537cf2e82709d2ade9a0cc78b8994d63f63b791935"}
Jan 22 09:31:17 crc kubenswrapper[4892]: I0122 09:31:17.035544 4892 generic.go:334] "Generic (PLEG): container finished" podID="4765e554-3060-4876-90fe-5e054619d7a1" containerID="31a997f31663709d14ae5efb219a31b8ac9b066d6e93055a348ee5203f0f3774" exitCode=0
Jan 22 09:31:17 crc kubenswrapper[4892]: I0122 09:31:17.035628 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerDied","Data":"31a997f31663709d14ae5efb219a31b8ac9b066d6e93055a348ee5203f0f3774"}
Jan 22 09:31:17 crc kubenswrapper[4892]: I0122 09:31:17.035688 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerStarted","Data":"78a6433c45938fca7e3f01a04a252af5d76315e063c7dbe1b1f8aa3b3903b18b"}
Jan 22 09:31:17 crc kubenswrapper[4892]: I0122 09:31:17.035708 4892 scope.go:117] "RemoveContainer" containerID="117e0c1b92dcf102d5c4006956ffbc6d1b9e2073ac26c26fea7a169bb0945ba2"
Jan 22 09:31:17 crc kubenswrapper[4892]: I0122 09:31:17.059784 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.059761615 podStartE2EDuration="2.059761615s" podCreationTimestamp="2026-01-22 09:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:31:17.048403588 +0000 UTC m=+1246.892482661" watchObservedRunningTime="2026-01-22 09:31:17.059761615 +0000 UTC m=+1246.903840688"
Jan 22 09:31:17 crc kubenswrapper[4892]: I0122 09:31:17.088941 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.088918567 podStartE2EDuration="2.088918567s" podCreationTimestamp="2026-01-22 09:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:31:17.078842451 +0000 UTC m=+1246.922921524" watchObservedRunningTime="2026-01-22 09:31:17.088918567 +0000 UTC m=+1246.932997650"
Jan 22 09:31:17 crc kubenswrapper[4892]: I0122 09:31:17.435088 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a3ff585-adf7-489e-8987-74d52e3cbe73" path="/var/lib/kubelet/pods/9a3ff585-adf7-489e-8987-74d52e3cbe73/volumes"
Jan 22 09:31:18 crc kubenswrapper[4892]: I0122 09:31:18.082607 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fb0b7944-3391-4c47-91a6-47c3aa62442a","Type":"ContainerStarted","Data":"2af94de71dde5febf948cfd150e1784d272e638d686f5e1086a9ea7cb7f4af87"}
Jan 22 09:31:20 crc kubenswrapper[4892]: I0122 09:31:20.695009 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 22 09:31:20 crc kubenswrapper[4892]: I0122 09:31:20.729715 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 22 09:31:20 crc kubenswrapper[4892]: I0122 09:31:20.729773 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 22 09:31:25 crc kubenswrapper[4892]: I0122 09:31:25.695411 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 22 09:31:25 crc kubenswrapper[4892]: I0122 09:31:25.729942 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 22 09:31:25 crc kubenswrapper[4892]: I0122 09:31:25.729986 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 22 09:31:25 crc kubenswrapper[4892]: I0122 09:31:25.746345 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 22 09:31:25 crc kubenswrapper[4892]: I0122 09:31:25.776158 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=9.77614178 podStartE2EDuration="9.77614178s" podCreationTimestamp="2026-01-22 09:31:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:31:18.142436461 +0000 UTC m=+1247.986515544" watchObservedRunningTime="2026-01-22 09:31:25.77614178 +0000 UTC m=+1255.620220843"
Jan 22 09:31:26 crc kubenswrapper[4892]: I0122 09:31:26.216269 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 22 09:31:26 crc kubenswrapper[4892]: I0122 09:31:26.402961 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 22 09:31:26 crc kubenswrapper[4892]: I0122 09:31:26.403029 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 22 09:31:26 crc kubenswrapper[4892]: I0122 09:31:26.746604 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="51bca3ef-0b5c-4c51-bf42-95ad11eba3be" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 22 09:31:26 crc kubenswrapper[4892]: I0122 09:31:26.746601 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="51bca3ef-0b5c-4c51-bf42-95ad11eba3be" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 22 09:31:27 crc kubenswrapper[4892]: I0122 09:31:27.419070 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fb0b7944-3391-4c47-91a6-47c3aa62442a" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 22 09:31:27 crc kubenswrapper[4892]: I0122 09:31:27.419081 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fb0b7944-3391-4c47-91a6-47c3aa62442a" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 22 09:31:29 crc kubenswrapper[4892]: I0122 09:31:29.575928 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 22 09:31:35 crc kubenswrapper[4892]: I0122 09:31:35.736244 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 22 09:31:35 crc kubenswrapper[4892]: I0122 09:31:35.744753 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 22 09:31:35 crc kubenswrapper[4892]: I0122 09:31:35.747903 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 22 09:31:36 crc kubenswrapper[4892]: I0122 09:31:36.260619 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 22 09:31:36 crc kubenswrapper[4892]: I0122 09:31:36.410546 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 22 09:31:36 crc kubenswrapper[4892]: I0122 09:31:36.410621 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 22 09:31:36 crc kubenswrapper[4892]: I0122 09:31:36.411694 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 22 09:31:36 crc kubenswrapper[4892]: I0122 09:31:36.411755 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 22 09:31:36 crc kubenswrapper[4892]: I0122 09:31:36.425844 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 22 09:31:36 crc kubenswrapper[4892]: I0122 09:31:36.425920 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 22 09:31:44 crc kubenswrapper[4892]: I0122 09:31:44.276442 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 22 09:31:45 crc kubenswrapper[4892]: I0122 09:31:45.104357 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 22 09:31:48 crc kubenswrapper[4892]: I0122 09:31:48.850676 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="c3106222-75cd-4011-a7d0-33a3d39e3f0c" containerName="rabbitmq" containerID="cri-o://761cbc2ad31d8c772853deace9c46eb9472b5e14da71aff569f880d3995af45e" gracePeriod=604796
Jan 22 09:31:49 crc kubenswrapper[4892]: I0122 09:31:49.482553 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f" containerName="rabbitmq" containerID="cri-o://0ac4cc9da7af0c64469774609d98e8d5f0e4f4eb8ef412f9a43e413357fde958" gracePeriod=604796
Jan 22 09:31:51 crc kubenswrapper[4892]: I0122 09:31:51.634596 4892 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="c3106222-75cd-4011-a7d0-33a3d39e3f0c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.94:5671: connect: connection refused"
Jan 22 09:31:51 crc kubenswrapper[4892]: I0122 09:31:51.648581 4892 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.95:5671: connect: connection refused"
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.421633 4892 generic.go:334] "Generic (PLEG): container finished" podID="c3106222-75cd-4011-a7d0-33a3d39e3f0c" containerID="761cbc2ad31d8c772853deace9c46eb9472b5e14da71aff569f880d3995af45e" exitCode=0
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.430909 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c3106222-75cd-4011-a7d0-33a3d39e3f0c","Type":"ContainerDied","Data":"761cbc2ad31d8c772853deace9c46eb9472b5e14da71aff569f880d3995af45e"}
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.430954 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c3106222-75cd-4011-a7d0-33a3d39e3f0c","Type":"ContainerDied","Data":"10967b58bf59e20d3268bafd761c8a3de605d38f2c4a2a8710091816b4380699"}
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.430969 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10967b58bf59e20d3268bafd761c8a3de605d38f2c4a2a8710091816b4380699"
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.498063 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.685487 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") "
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.685803 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c3106222-75cd-4011-a7d0-33a3d39e3f0c-rabbitmq-tls\") pod \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") "
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.686011 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c3106222-75cd-4011-a7d0-33a3d39e3f0c-pod-info\") pod \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") "
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.686126 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c3106222-75cd-4011-a7d0-33a3d39e3f0c-server-conf\") pod \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") "
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.686221 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c3106222-75cd-4011-a7d0-33a3d39e3f0c-rabbitmq-confd\") pod \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") "
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.686333 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c3106222-75cd-4011-a7d0-33a3d39e3f0c-erlang-cookie-secret\") pod \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") "
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.686453 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c3106222-75cd-4011-a7d0-33a3d39e3f0c-rabbitmq-plugins\") pod \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") "
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.686548 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c3106222-75cd-4011-a7d0-33a3d39e3f0c-config-data\") pod \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") "
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.686617 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c3106222-75cd-4011-a7d0-33a3d39e3f0c-plugins-conf\") pod \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") "
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.686737 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkwqm\" (UniqueName: \"kubernetes.io/projected/c3106222-75cd-4011-a7d0-33a3d39e3f0c-kube-api-access-bkwqm\") pod \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") "
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.686864 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c3106222-75cd-4011-a7d0-33a3d39e3f0c-rabbitmq-erlang-cookie\") pod \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\" (UID: \"c3106222-75cd-4011-a7d0-33a3d39e3f0c\") "
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.687158 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3106222-75cd-4011-a7d0-33a3d39e3f0c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "c3106222-75cd-4011-a7d0-33a3d39e3f0c" (UID: "c3106222-75cd-4011-a7d0-33a3d39e3f0c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.687212 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3106222-75cd-4011-a7d0-33a3d39e3f0c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "c3106222-75cd-4011-a7d0-33a3d39e3f0c" (UID: "c3106222-75cd-4011-a7d0-33a3d39e3f0c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.687416 4892 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c3106222-75cd-4011-a7d0-33a3d39e3f0c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.687486 4892 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c3106222-75cd-4011-a7d0-33a3d39e3f0c-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.687748 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3106222-75cd-4011-a7d0-33a3d39e3f0c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "c3106222-75cd-4011-a7d0-33a3d39e3f0c" (UID: "c3106222-75cd-4011-a7d0-33a3d39e3f0c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.694200 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "c3106222-75cd-4011-a7d0-33a3d39e3f0c" (UID: "c3106222-75cd-4011-a7d0-33a3d39e3f0c"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.694260 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3106222-75cd-4011-a7d0-33a3d39e3f0c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "c3106222-75cd-4011-a7d0-33a3d39e3f0c" (UID: "c3106222-75cd-4011-a7d0-33a3d39e3f0c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.694410 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3106222-75cd-4011-a7d0-33a3d39e3f0c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "c3106222-75cd-4011-a7d0-33a3d39e3f0c" (UID: "c3106222-75cd-4011-a7d0-33a3d39e3f0c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.695391 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3106222-75cd-4011-a7d0-33a3d39e3f0c-kube-api-access-bkwqm" (OuterVolumeSpecName: "kube-api-access-bkwqm") pod "c3106222-75cd-4011-a7d0-33a3d39e3f0c" (UID: "c3106222-75cd-4011-a7d0-33a3d39e3f0c"). InnerVolumeSpecName "kube-api-access-bkwqm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.738855 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/c3106222-75cd-4011-a7d0-33a3d39e3f0c-pod-info" (OuterVolumeSpecName: "pod-info") pod "c3106222-75cd-4011-a7d0-33a3d39e3f0c" (UID: "c3106222-75cd-4011-a7d0-33a3d39e3f0c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.764898 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3106222-75cd-4011-a7d0-33a3d39e3f0c-config-data" (OuterVolumeSpecName: "config-data") pod "c3106222-75cd-4011-a7d0-33a3d39e3f0c" (UID: "c3106222-75cd-4011-a7d0-33a3d39e3f0c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.790160 4892 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c3106222-75cd-4011-a7d0-33a3d39e3f0c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.790223 4892 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" "
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.790239 4892 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c3106222-75cd-4011-a7d0-33a3d39e3f0c-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.790326 4892 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c3106222-75cd-4011-a7d0-33a3d39e3f0c-pod-info\") on node \"crc\" DevicePath \"\""
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.790342 4892 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c3106222-75cd-4011-a7d0-33a3d39e3f0c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.790353 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c3106222-75cd-4011-a7d0-33a3d39e3f0c-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.790366 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkwqm\" (UniqueName: \"kubernetes.io/projected/c3106222-75cd-4011-a7d0-33a3d39e3f0c-kube-api-access-bkwqm\") on node \"crc\" DevicePath \"\""
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.808555 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3106222-75cd-4011-a7d0-33a3d39e3f0c-server-conf" (OuterVolumeSpecName: "server-conf") pod "c3106222-75cd-4011-a7d0-33a3d39e3f0c" (UID: "c3106222-75cd-4011-a7d0-33a3d39e3f0c"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.820499 4892 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc"
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.829128 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3106222-75cd-4011-a7d0-33a3d39e3f0c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "c3106222-75cd-4011-a7d0-33a3d39e3f0c" (UID: "c3106222-75cd-4011-a7d0-33a3d39e3f0c"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.892950 4892 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\""
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.892983 4892 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c3106222-75cd-4011-a7d0-33a3d39e3f0c-server-conf\") on node \"crc\" DevicePath \"\""
Jan 22 09:31:55 crc kubenswrapper[4892]: I0122 09:31:55.892994 4892 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c3106222-75cd-4011-a7d0-33a3d39e3f0c-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.035868 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.198115 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-rabbitmq-confd\") pod \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") "
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.198511 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-rabbitmq-erlang-cookie\") pod \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") "
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.198662 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-erlang-cookie-secret\") pod \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") "
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.198790 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k24b5\" (UniqueName: \"kubernetes.io/projected/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-kube-api-access-k24b5\") pod \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") "
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.198891 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-pod-info\") pod \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") "
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.198976 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-server-conf\") pod \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") "
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.199125 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-plugins-conf\") pod \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") "
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.199227 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") "
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.199325 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-config-data\") pod \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") "
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.199434 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-rabbitmq-plugins\") pod \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") "
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.199537 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-rabbitmq-tls\") pod \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\" (UID: \"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f\") "
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.199338 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f" (UID: "ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.199780 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f" (UID: "ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.199997 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f" (UID: "ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.200586 4892 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.200711 4892 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.200820 4892 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.204945 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-pod-info" (OuterVolumeSpecName: "pod-info") pod "ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f" (UID: "ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.205258 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f" (UID: "ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.206609 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f" (UID: "ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.207108 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f" (UID: "ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.210504 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-kube-api-access-k24b5" (OuterVolumeSpecName: "kube-api-access-k24b5") pod "ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f" (UID: "ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f"). InnerVolumeSpecName "kube-api-access-k24b5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.229214 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-config-data" (OuterVolumeSpecName: "config-data") pod "ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f" (UID: "ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.275882 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-server-conf" (OuterVolumeSpecName: "server-conf") pod "ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f" (UID: "ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.302152 4892 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" "
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.302185 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.302194 4892 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.302205 4892 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.302214 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k24b5\" (UniqueName: \"kubernetes.io/projected/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-kube-api-access-k24b5\") on node \"crc\" DevicePath \"\""
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.302222 4892 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-pod-info\") on node \"crc\" DevicePath \"\""
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.302231 4892 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-server-conf\") on node \"crc\" DevicePath \"\""
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.327587 4892 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc"
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.344382 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f" (UID: "ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.404229 4892 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.404271 4892 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\""
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.433965 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.434041 4892 generic.go:334] "Generic (PLEG): container finished" podID="ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f" containerID="0ac4cc9da7af0c64469774609d98e8d5f0e4f4eb8ef412f9a43e413357fde958" exitCode=0
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.434098 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f","Type":"ContainerDied","Data":"0ac4cc9da7af0c64469774609d98e8d5f0e4f4eb8ef412f9a43e413357fde958"}
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.434135 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f","Type":"ContainerDied","Data":"98ccf255e72579a1af647cf1ff20f0e9fd7f261d6c4679aaa0865186cae5adf2"}
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.434165 4892 scope.go:117] "RemoveContainer" containerID="0ac4cc9da7af0c64469774609d98e8d5f0e4f4eb8ef412f9a43e413357fde958"
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.434275 4892 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.482583 4892 scope.go:117] "RemoveContainer" containerID="5e4f7488273b1bcc89697c8644593504782d89df4d3e37c391c0d53e1f87559c" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.494529 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.514454 4892 scope.go:117] "RemoveContainer" containerID="0ac4cc9da7af0c64469774609d98e8d5f0e4f4eb8ef412f9a43e413357fde958" Jan 22 09:31:56 crc kubenswrapper[4892]: E0122 09:31:56.519429 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ac4cc9da7af0c64469774609d98e8d5f0e4f4eb8ef412f9a43e413357fde958\": container with ID starting with 0ac4cc9da7af0c64469774609d98e8d5f0e4f4eb8ef412f9a43e413357fde958 not found: ID does not exist" containerID="0ac4cc9da7af0c64469774609d98e8d5f0e4f4eb8ef412f9a43e413357fde958" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.519474 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ac4cc9da7af0c64469774609d98e8d5f0e4f4eb8ef412f9a43e413357fde958"} err="failed to get container status \"0ac4cc9da7af0c64469774609d98e8d5f0e4f4eb8ef412f9a43e413357fde958\": rpc error: code = NotFound desc = could not find container \"0ac4cc9da7af0c64469774609d98e8d5f0e4f4eb8ef412f9a43e413357fde958\": container with ID starting with 0ac4cc9da7af0c64469774609d98e8d5f0e4f4eb8ef412f9a43e413357fde958 not found: ID does not exist" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.519501 4892 scope.go:117] "RemoveContainer" containerID="5e4f7488273b1bcc89697c8644593504782d89df4d3e37c391c0d53e1f87559c" Jan 22 09:31:56 crc kubenswrapper[4892]: E0122 09:31:56.523398 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e4f7488273b1bcc89697c8644593504782d89df4d3e37c391c0d53e1f87559c\": container with ID starting with 5e4f7488273b1bcc89697c8644593504782d89df4d3e37c391c0d53e1f87559c not found: ID does not exist" containerID="5e4f7488273b1bcc89697c8644593504782d89df4d3e37c391c0d53e1f87559c" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.523439 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e4f7488273b1bcc89697c8644593504782d89df4d3e37c391c0d53e1f87559c"} err="failed to get container status \"5e4f7488273b1bcc89697c8644593504782d89df4d3e37c391c0d53e1f87559c\": rpc error: code = NotFound desc = could not find container \"5e4f7488273b1bcc89697c8644593504782d89df4d3e37c391c0d53e1f87559c\": container with ID starting with 5e4f7488273b1bcc89697c8644593504782d89df4d3e37c391c0d53e1f87559c not found: ID does not exist" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.527701 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.540342 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.569350 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.603346 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 09:31:56 crc kubenswrapper[4892]: E0122 09:31:56.603780 
4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3106222-75cd-4011-a7d0-33a3d39e3f0c" containerName="setup-container" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.603797 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3106222-75cd-4011-a7d0-33a3d39e3f0c" containerName="setup-container" Jan 22 09:31:56 crc kubenswrapper[4892]: E0122 09:31:56.603823 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3106222-75cd-4011-a7d0-33a3d39e3f0c" containerName="rabbitmq" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.603831 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3106222-75cd-4011-a7d0-33a3d39e3f0c" containerName="rabbitmq" Jan 22 09:31:56 crc kubenswrapper[4892]: E0122 09:31:56.603840 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f" containerName="setup-container" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.603846 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f" containerName="setup-container" Jan 22 09:31:56 crc kubenswrapper[4892]: E0122 09:31:56.603867 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f" containerName="rabbitmq" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.603873 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f" containerName="rabbitmq" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.604044 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3106222-75cd-4011-a7d0-33a3d39e3f0c" containerName="rabbitmq" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.604056 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f" containerName="rabbitmq" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.605016 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.618343 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.620034 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.632218 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.632240 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.632468 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.635727 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.638632 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.639018 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.639629 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.639830 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.639872 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.640503 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.642511 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.643338 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-4n6tb" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.643633 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.648140 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-5s4nt" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.648572 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.710628 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"57552917-a09b-4f52-96b5-c7749b9af779\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.711049 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/57552917-a09b-4f52-96b5-c7749b9af779-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"57552917-a09b-4f52-96b5-c7749b9af779\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.711187 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/30fa58bc-46e3-40c4-ad73-3f2e1f8341dd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd\") " pod="openstack/rabbitmq-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.711359 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/57552917-a09b-4f52-96b5-c7749b9af779-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"57552917-a09b-4f52-96b5-c7749b9af779\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.711483 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/30fa58bc-46e3-40c4-ad73-3f2e1f8341dd-config-data\") pod \"rabbitmq-server-0\" (UID: \"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd\") " pod="openstack/rabbitmq-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.711613 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/30fa58bc-46e3-40c4-ad73-3f2e1f8341dd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd\") " pod="openstack/rabbitmq-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.711741 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/30fa58bc-46e3-40c4-ad73-3f2e1f8341dd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd\") " pod="openstack/rabbitmq-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.711855 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/57552917-a09b-4f52-96b5-c7749b9af779-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"57552917-a09b-4f52-96b5-c7749b9af779\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.711973 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/57552917-a09b-4f52-96b5-c7749b9af779-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"57552917-a09b-4f52-96b5-c7749b9af779\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.712158 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/30fa58bc-46e3-40c4-ad73-3f2e1f8341dd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd\") " pod="openstack/rabbitmq-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.712310 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh6rj\" (UniqueName: \"kubernetes.io/projected/30fa58bc-46e3-40c4-ad73-3f2e1f8341dd-kube-api-access-bh6rj\") pod \"rabbitmq-server-0\" (UID: \"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd\") " pod="openstack/rabbitmq-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.712456 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd\") " pod="openstack/rabbitmq-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.712572 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/57552917-a09b-4f52-96b5-c7749b9af779-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"57552917-a09b-4f52-96b5-c7749b9af779\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.712776 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/30fa58bc-46e3-40c4-ad73-3f2e1f8341dd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd\") " pod="openstack/rabbitmq-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.712901 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/30fa58bc-46e3-40c4-ad73-3f2e1f8341dd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd\") " pod="openstack/rabbitmq-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.713017 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/30fa58bc-46e3-40c4-ad73-3f2e1f8341dd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd\") " pod="openstack/rabbitmq-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.713197 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/30fa58bc-46e3-40c4-ad73-3f2e1f8341dd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd\") " pod="openstack/rabbitmq-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.713335 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/57552917-a09b-4f52-96b5-c7749b9af779-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"57552917-a09b-4f52-96b5-c7749b9af779\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.713490 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/57552917-a09b-4f52-96b5-c7749b9af779-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"57552917-a09b-4f52-96b5-c7749b9af779\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.713663 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57552917-a09b-4f52-96b5-c7749b9af779-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"57552917-a09b-4f52-96b5-c7749b9af779\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.713796 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w7qx\" (UniqueName: 
\"kubernetes.io/projected/57552917-a09b-4f52-96b5-c7749b9af779-kube-api-access-7w7qx\") pod \"rabbitmq-cell1-server-0\" (UID: \"57552917-a09b-4f52-96b5-c7749b9af779\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.713944 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/57552917-a09b-4f52-96b5-c7749b9af779-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"57552917-a09b-4f52-96b5-c7749b9af779\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.806322 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.815880 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"57552917-a09b-4f52-96b5-c7749b9af779\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.815923 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/57552917-a09b-4f52-96b5-c7749b9af779-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"57552917-a09b-4f52-96b5-c7749b9af779\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.815945 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/30fa58bc-46e3-40c4-ad73-3f2e1f8341dd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd\") " pod="openstack/rabbitmq-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.815988 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/57552917-a09b-4f52-96b5-c7749b9af779-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"57552917-a09b-4f52-96b5-c7749b9af779\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.816009 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/30fa58bc-46e3-40c4-ad73-3f2e1f8341dd-config-data\") pod \"rabbitmq-server-0\" (UID: \"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd\") " pod="openstack/rabbitmq-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.816029 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/30fa58bc-46e3-40c4-ad73-3f2e1f8341dd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd\") " pod="openstack/rabbitmq-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.816047 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/30fa58bc-46e3-40c4-ad73-3f2e1f8341dd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd\") " pod="openstack/rabbitmq-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.816061 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/57552917-a09b-4f52-96b5-c7749b9af779-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"57552917-a09b-4f52-96b5-c7749b9af779\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.816078 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/57552917-a09b-4f52-96b5-c7749b9af779-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"57552917-a09b-4f52-96b5-c7749b9af779\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.816107 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/30fa58bc-46e3-40c4-ad73-3f2e1f8341dd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd\") " pod="openstack/rabbitmq-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.816137 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh6rj\" (UniqueName: \"kubernetes.io/projected/30fa58bc-46e3-40c4-ad73-3f2e1f8341dd-kube-api-access-bh6rj\") pod \"rabbitmq-server-0\" (UID: \"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd\") " pod="openstack/rabbitmq-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.816161 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd\") " pod="openstack/rabbitmq-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.816178 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/57552917-a09b-4f52-96b5-c7749b9af779-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"57552917-a09b-4f52-96b5-c7749b9af779\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.816202 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/30fa58bc-46e3-40c4-ad73-3f2e1f8341dd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd\") " pod="openstack/rabbitmq-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.816197 4892 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"57552917-a09b-4f52-96b5-c7749b9af779\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.816221 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/30fa58bc-46e3-40c4-ad73-3f2e1f8341dd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd\") " pod="openstack/rabbitmq-server-0" Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.816246 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/30fa58bc-46e3-40c4-ad73-3f2e1f8341dd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd\") " pod="openstack/rabbitmq-server-0" 
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.816374 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/30fa58bc-46e3-40c4-ad73-3f2e1f8341dd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd\") " pod="openstack/rabbitmq-server-0"
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.816394 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/57552917-a09b-4f52-96b5-c7749b9af779-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"57552917-a09b-4f52-96b5-c7749b9af779\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.816416 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/57552917-a09b-4f52-96b5-c7749b9af779-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"57552917-a09b-4f52-96b5-c7749b9af779\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.816450 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57552917-a09b-4f52-96b5-c7749b9af779-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"57552917-a09b-4f52-96b5-c7749b9af779\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.816467 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w7qx\" (UniqueName: \"kubernetes.io/projected/57552917-a09b-4f52-96b5-c7749b9af779-kube-api-access-7w7qx\") pod \"rabbitmq-cell1-server-0\" (UID: \"57552917-a09b-4f52-96b5-c7749b9af779\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.816485 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/57552917-a09b-4f52-96b5-c7749b9af779-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"57552917-a09b-4f52-96b5-c7749b9af779\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.816993 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/57552917-a09b-4f52-96b5-c7749b9af779-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"57552917-a09b-4f52-96b5-c7749b9af779\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.817937 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/57552917-a09b-4f52-96b5-c7749b9af779-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"57552917-a09b-4f52-96b5-c7749b9af779\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.818749 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/30fa58bc-46e3-40c4-ad73-3f2e1f8341dd-config-data\") pod \"rabbitmq-server-0\" (UID: \"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd\") " pod="openstack/rabbitmq-server-0"
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.819102 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/30fa58bc-46e3-40c4-ad73-3f2e1f8341dd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd\") " pod="openstack/rabbitmq-server-0"
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.820219 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/30fa58bc-46e3-40c4-ad73-3f2e1f8341dd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd\") " pod="openstack/rabbitmq-server-0"
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.820409 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/57552917-a09b-4f52-96b5-c7749b9af779-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"57552917-a09b-4f52-96b5-c7749b9af779\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.820502 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/30fa58bc-46e3-40c4-ad73-3f2e1f8341dd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd\") " pod="openstack/rabbitmq-server-0"
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.820813 4892 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0"
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.821942 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/30fa58bc-46e3-40c4-ad73-3f2e1f8341dd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd\") " pod="openstack/rabbitmq-server-0"
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.826083 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57552917-a09b-4f52-96b5-c7749b9af779-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"57552917-a09b-4f52-96b5-c7749b9af779\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.826349 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/57552917-a09b-4f52-96b5-c7749b9af779-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"57552917-a09b-4f52-96b5-c7749b9af779\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.826584 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/30fa58bc-46e3-40c4-ad73-3f2e1f8341dd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd\") " pod="openstack/rabbitmq-server-0"
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.832322 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/57552917-a09b-4f52-96b5-c7749b9af779-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"57552917-a09b-4f52-96b5-c7749b9af779\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.839500 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/57552917-a09b-4f52-96b5-c7749b9af779-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"57552917-a09b-4f52-96b5-c7749b9af779\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.839624 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/57552917-a09b-4f52-96b5-c7749b9af779-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"57552917-a09b-4f52-96b5-c7749b9af779\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.839696 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/30fa58bc-46e3-40c4-ad73-3f2e1f8341dd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd\") " pod="openstack/rabbitmq-server-0"
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.839817 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/30fa58bc-46e3-40c4-ad73-3f2e1f8341dd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd\") " pod="openstack/rabbitmq-server-0"
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.840277 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/57552917-a09b-4f52-96b5-c7749b9af779-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"57552917-a09b-4f52-96b5-c7749b9af779\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.840759 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/30fa58bc-46e3-40c4-ad73-3f2e1f8341dd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd\") " pod="openstack/rabbitmq-server-0"
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.848550 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh6rj\" (UniqueName: \"kubernetes.io/projected/30fa58bc-46e3-40c4-ad73-3f2e1f8341dd-kube-api-access-bh6rj\") pod \"rabbitmq-server-0\" (UID: \"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd\") " pod="openstack/rabbitmq-server-0"
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.855221 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w7qx\" (UniqueName: \"kubernetes.io/projected/57552917-a09b-4f52-96b5-c7749b9af779-kube-api-access-7w7qx\") pod \"rabbitmq-cell1-server-0\" (UID: \"57552917-a09b-4f52-96b5-c7749b9af779\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.867125 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"57552917-a09b-4f52-96b5-c7749b9af779\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.885412 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd\") " pod="openstack/rabbitmq-server-0"
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.955239 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 22 09:31:56 crc kubenswrapper[4892]: I0122 09:31:56.964925 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 22 09:31:57 crc kubenswrapper[4892]: I0122 09:31:57.433178 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3106222-75cd-4011-a7d0-33a3d39e3f0c" path="/var/lib/kubelet/pods/c3106222-75cd-4011-a7d0-33a3d39e3f0c/volumes"
Jan 22 09:31:57 crc kubenswrapper[4892]: I0122 09:31:57.434277 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f" path="/var/lib/kubelet/pods/ef2d11ba-fdb4-4ade-af1b-59dae1b1d10f/volumes"
Jan 22 09:31:57 crc kubenswrapper[4892]: I0122 09:31:57.472692 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 22 09:31:57 crc kubenswrapper[4892]: I0122 09:31:57.530226 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-668b55cdd7-6wmk6"]
Jan 22 09:31:57 crc kubenswrapper[4892]: I0122 09:31:57.568159 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-668b55cdd7-6wmk6"]
Jan 22 09:31:57 crc kubenswrapper[4892]: I0122 09:31:57.568297 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-668b55cdd7-6wmk6"
Jan 22 09:31:57 crc kubenswrapper[4892]: I0122 09:31:57.570505 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam"
Jan 22 09:31:57 crc kubenswrapper[4892]: I0122 09:31:57.588093 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 22 09:31:57 crc kubenswrapper[4892]: I0122 09:31:57.635676 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-dns-swift-storage-0\") pod \"dnsmasq-dns-668b55cdd7-6wmk6\" (UID: \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\") " pod="openstack/dnsmasq-dns-668b55cdd7-6wmk6"
Jan 22 09:31:57 crc kubenswrapper[4892]: I0122 09:31:57.635856 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z72l\" (UniqueName: \"kubernetes.io/projected/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-kube-api-access-8z72l\") pod \"dnsmasq-dns-668b55cdd7-6wmk6\" (UID: \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\") " pod="openstack/dnsmasq-dns-668b55cdd7-6wmk6"
Jan 22 09:31:57 crc kubenswrapper[4892]: I0122 09:31:57.636013 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-config\") pod \"dnsmasq-dns-668b55cdd7-6wmk6\" (UID: \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\") " pod="openstack/dnsmasq-dns-668b55cdd7-6wmk6"
Jan 22 09:31:57 crc kubenswrapper[4892]: I0122 09:31:57.636074 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-dns-svc\") pod \"dnsmasq-dns-668b55cdd7-6wmk6\" (UID: \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\") " pod="openstack/dnsmasq-dns-668b55cdd7-6wmk6"
Jan 22 09:31:57 crc kubenswrapper[4892]: I0122 09:31:57.636119 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-ovsdbserver-nb\") pod \"dnsmasq-dns-668b55cdd7-6wmk6\" (UID: \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\") " pod="openstack/dnsmasq-dns-668b55cdd7-6wmk6"
Jan 22 09:31:57 crc kubenswrapper[4892]: I0122 09:31:57.636167 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-openstack-edpm-ipam\") pod \"dnsmasq-dns-668b55cdd7-6wmk6\" (UID: \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\") " pod="openstack/dnsmasq-dns-668b55cdd7-6wmk6"
Jan 22 09:31:57 crc kubenswrapper[4892]: I0122 09:31:57.636200 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-ovsdbserver-sb\") pod \"dnsmasq-dns-668b55cdd7-6wmk6\" (UID: \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\") " pod="openstack/dnsmasq-dns-668b55cdd7-6wmk6"
Jan 22 09:31:57 crc kubenswrapper[4892]: I0122 09:31:57.738159 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-dns-svc\") pod \"dnsmasq-dns-668b55cdd7-6wmk6\" (UID: \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\") " pod="openstack/dnsmasq-dns-668b55cdd7-6wmk6"
Jan 22 09:31:57 crc kubenswrapper[4892]: I0122 09:31:57.738245 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-ovsdbserver-nb\") pod \"dnsmasq-dns-668b55cdd7-6wmk6\" (UID: \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\") " pod="openstack/dnsmasq-dns-668b55cdd7-6wmk6"
Jan 22 09:31:57 crc kubenswrapper[4892]: I0122 09:31:57.738317 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-openstack-edpm-ipam\") pod \"dnsmasq-dns-668b55cdd7-6wmk6\" (UID: \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\") " pod="openstack/dnsmasq-dns-668b55cdd7-6wmk6"
Jan 22 09:31:57 crc kubenswrapper[4892]: I0122 09:31:57.738349 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-ovsdbserver-sb\") pod \"dnsmasq-dns-668b55cdd7-6wmk6\" (UID: \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\") " pod="openstack/dnsmasq-dns-668b55cdd7-6wmk6"
Jan 22 09:31:57 crc kubenswrapper[4892]: I0122 09:31:57.738448 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-dns-swift-storage-0\") pod \"dnsmasq-dns-668b55cdd7-6wmk6\" (UID: \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\") " pod="openstack/dnsmasq-dns-668b55cdd7-6wmk6"
Jan 22 09:31:57 crc kubenswrapper[4892]: I0122 09:31:57.738489 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z72l\" (UniqueName: \"kubernetes.io/projected/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-kube-api-access-8z72l\") pod \"dnsmasq-dns-668b55cdd7-6wmk6\" (UID: \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\") " pod="openstack/dnsmasq-dns-668b55cdd7-6wmk6"
Jan 22 09:31:57 crc kubenswrapper[4892]: I0122 09:31:57.738572 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-config\") pod \"dnsmasq-dns-668b55cdd7-6wmk6\" (UID: \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\") " pod="openstack/dnsmasq-dns-668b55cdd7-6wmk6"
Jan 22 09:31:57 crc kubenswrapper[4892]: I0122 09:31:57.738999 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-dns-svc\") pod \"dnsmasq-dns-668b55cdd7-6wmk6\" (UID: \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\") " pod="openstack/dnsmasq-dns-668b55cdd7-6wmk6"
Jan 22 09:31:57 crc kubenswrapper[4892]: I0122 09:31:57.739517 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-config\") pod \"dnsmasq-dns-668b55cdd7-6wmk6\" (UID: \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\") " pod="openstack/dnsmasq-dns-668b55cdd7-6wmk6"
Jan 22 09:31:57 crc kubenswrapper[4892]: I0122 09:31:57.739572 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-ovsdbserver-sb\") pod \"dnsmasq-dns-668b55cdd7-6wmk6\" (UID: \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\") " pod="openstack/dnsmasq-dns-668b55cdd7-6wmk6"
Jan 22 09:31:57 crc kubenswrapper[4892]: I0122 09:31:57.740160 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-ovsdbserver-nb\") pod \"dnsmasq-dns-668b55cdd7-6wmk6\" (UID: \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\") " pod="openstack/dnsmasq-dns-668b55cdd7-6wmk6"
Jan 22 09:31:57 crc kubenswrapper[4892]: I0122 09:31:57.740317 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-dns-swift-storage-0\") pod \"dnsmasq-dns-668b55cdd7-6wmk6\" (UID: \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\") " pod="openstack/dnsmasq-dns-668b55cdd7-6wmk6"
Jan 22 09:31:57 crc kubenswrapper[4892]: I0122 09:31:57.741214 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-openstack-edpm-ipam\") pod \"dnsmasq-dns-668b55cdd7-6wmk6\" (UID: \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\") " pod="openstack/dnsmasq-dns-668b55cdd7-6wmk6"
Jan 22 09:31:57 crc kubenswrapper[4892]: I0122 09:31:57.756567 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z72l\" (UniqueName: \"kubernetes.io/projected/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-kube-api-access-8z72l\") pod \"dnsmasq-dns-668b55cdd7-6wmk6\" (UID: \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\") " pod="openstack/dnsmasq-dns-668b55cdd7-6wmk6"
Jan 22 09:31:57 crc kubenswrapper[4892]: I0122 09:31:57.978916 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-668b55cdd7-6wmk6"
Jan 22 09:31:58 crc kubenswrapper[4892]: I0122 09:31:58.233358 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-668b55cdd7-6wmk6"]
Jan 22 09:31:58 crc kubenswrapper[4892]: W0122 09:31:58.238207 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdd31439_cd48_4ff3_8b4a_444c5e40cc9b.slice/crio-784454ea7e6c18279cd715bc71bad31da7892a145dec83ca1e25f844660fded3 WatchSource:0}: Error finding container 784454ea7e6c18279cd715bc71bad31da7892a145dec83ca1e25f844660fded3: Status 404 returned error can't find the container with id 784454ea7e6c18279cd715bc71bad31da7892a145dec83ca1e25f844660fded3
Jan 22 09:31:58 crc kubenswrapper[4892]: I0122 09:31:58.456424 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-668b55cdd7-6wmk6" event={"ID":"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b","Type":"ContainerStarted","Data":"784454ea7e6c18279cd715bc71bad31da7892a145dec83ca1e25f844660fded3"}
Jan 22 09:31:58 crc kubenswrapper[4892]: I0122 09:31:58.459333 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"57552917-a09b-4f52-96b5-c7749b9af779","Type":"ContainerStarted","Data":"0afabc453d741144c86e7fd443d59df9edced597f7ba0610fa1c3a9a7ace868f"}
Jan 22 09:31:58 crc kubenswrapper[4892]: I0122 09:31:58.466605 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd","Type":"ContainerStarted","Data":"b975713591286945bae5b131eb8ccd93324879e6b82172f510e3377886f75f82"}
Jan 22 09:31:59 crc kubenswrapper[4892]: I0122 09:31:59.483900 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd","Type":"ContainerStarted","Data":"f5283fb0d536ad98b8e6714b551adc32cd90054d386ce61b5a0c9022bd402474"}
Jan 22 09:31:59 crc kubenswrapper[4892]: I0122 09:31:59.486250 4892 generic.go:334] "Generic (PLEG): container finished" podID="fdd31439-cd48-4ff3-8b4a-444c5e40cc9b" containerID="6b5d7bf96e90dfd3688ad38f214ee543ed393acea267693a137277a763df70fa" exitCode=0
Jan 22 09:31:59 crc kubenswrapper[4892]: I0122 09:31:59.486342 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-668b55cdd7-6wmk6" event={"ID":"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b","Type":"ContainerDied","Data":"6b5d7bf96e90dfd3688ad38f214ee543ed393acea267693a137277a763df70fa"}
Jan 22 09:31:59 crc kubenswrapper[4892]: I0122 09:31:59.488917 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"57552917-a09b-4f52-96b5-c7749b9af779","Type":"ContainerStarted","Data":"69c4846ab8535600513e91727aa9e54c309d80943a9a5c2223a40cc0d2315c1d"}
Jan 22 09:32:00 crc kubenswrapper[4892]: I0122 09:32:00.506231 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-668b55cdd7-6wmk6" event={"ID":"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b","Type":"ContainerStarted","Data":"6d521226082cd01454e64e50ea9da5ed00dc29840dbb326a86cf6241eca93155"}
Jan 22 09:32:00 crc kubenswrapper[4892]: I0122 09:32:00.538861 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-668b55cdd7-6wmk6" podStartSLOduration=3.538842371 podStartE2EDuration="3.538842371s" podCreationTimestamp="2026-01-22 09:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:32:00.53510701 +0000 UTC m=+1290.379186113" watchObservedRunningTime="2026-01-22 09:32:00.538842371 +0000 UTC m=+1290.382921444"
Jan 22 09:32:01 crc kubenswrapper[4892]: I0122 09:32:01.515647 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-668b55cdd7-6wmk6"
Jan 22 09:32:07 crc kubenswrapper[4892]: I0122 09:32:07.980585 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-668b55cdd7-6wmk6"
Jan 22 09:32:08 crc kubenswrapper[4892]: I0122 09:32:08.047938 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-rdw7w"]
Jan 22 09:32:08 crc kubenswrapper[4892]: I0122 09:32:08.048194 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ddd577785-rdw7w" podUID="ce583610-456f-400a-b840-9d2c65ecf4a6" containerName="dnsmasq-dns" containerID="cri-o://ade6274ed31e4c12f5e3b614cb123e1a8eb78d2efa08a847db314714639be0cf" gracePeriod=10
Jan 22 09:32:08 crc kubenswrapper[4892]: I0122 09:32:08.245713 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66fc59ccbf-nt7cg"]
Jan 22 09:32:08 crc kubenswrapper[4892]: I0122 09:32:08.247636 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66fc59ccbf-nt7cg"
Jan 22 09:32:08 crc kubenswrapper[4892]: I0122 09:32:08.267227 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66fc59ccbf-nt7cg"]
Jan 22 09:32:08 crc kubenswrapper[4892]: I0122 09:32:08.371401 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e4d2f9f5-3308-487a-871d-b411f6951ead-openstack-edpm-ipam\") pod \"dnsmasq-dns-66fc59ccbf-nt7cg\" (UID: \"e4d2f9f5-3308-487a-871d-b411f6951ead\") " pod="openstack/dnsmasq-dns-66fc59ccbf-nt7cg"
Jan 22 09:32:08 crc kubenswrapper[4892]: I0122 09:32:08.371466 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4d2f9f5-3308-487a-871d-b411f6951ead-dns-svc\") pod \"dnsmasq-dns-66fc59ccbf-nt7cg\" (UID: \"e4d2f9f5-3308-487a-871d-b411f6951ead\") " pod="openstack/dnsmasq-dns-66fc59ccbf-nt7cg"
Jan 22 09:32:08 crc kubenswrapper[4892]: I0122 09:32:08.371505 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4d2f9f5-3308-487a-871d-b411f6951ead-config\") pod \"dnsmasq-dns-66fc59ccbf-nt7cg\" (UID: \"e4d2f9f5-3308-487a-871d-b411f6951ead\") " pod="openstack/dnsmasq-dns-66fc59ccbf-nt7cg"
Jan 22 09:32:08 crc kubenswrapper[4892]: I0122 09:32:08.371536 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e4d2f9f5-3308-487a-871d-b411f6951ead-ovsdbserver-sb\") pod \"dnsmasq-dns-66fc59ccbf-nt7cg\" (UID: \"e4d2f9f5-3308-487a-871d-b411f6951ead\") " pod="openstack/dnsmasq-dns-66fc59ccbf-nt7cg"
Jan 22 09:32:08 crc kubenswrapper[4892]: I0122 09:32:08.371575 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4d2f9f5-3308-487a-871d-b411f6951ead-ovsdbserver-nb\") pod \"dnsmasq-dns-66fc59ccbf-nt7cg\" (UID: \"e4d2f9f5-3308-487a-871d-b411f6951ead\") " pod="openstack/dnsmasq-dns-66fc59ccbf-nt7cg"
Jan 22 09:32:08 crc kubenswrapper[4892]: I0122 09:32:08.371634 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4tst\" (UniqueName: \"kubernetes.io/projected/e4d2f9f5-3308-487a-871d-b411f6951ead-kube-api-access-z4tst\") pod \"dnsmasq-dns-66fc59ccbf-nt7cg\" (UID: \"e4d2f9f5-3308-487a-871d-b411f6951ead\") " pod="openstack/dnsmasq-dns-66fc59ccbf-nt7cg"
Jan 22 09:32:08 crc kubenswrapper[4892]: I0122 09:32:08.371696 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e4d2f9f5-3308-487a-871d-b411f6951ead-dns-swift-storage-0\") pod \"dnsmasq-dns-66fc59ccbf-nt7cg\" (UID: \"e4d2f9f5-3308-487a-871d-b411f6951ead\") " pod="openstack/dnsmasq-dns-66fc59ccbf-nt7cg"
Jan 22 09:32:08 crc kubenswrapper[4892]: I0122 09:32:08.474444 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4tst\" (UniqueName: \"kubernetes.io/projected/e4d2f9f5-3308-487a-871d-b411f6951ead-kube-api-access-z4tst\") pod \"dnsmasq-dns-66fc59ccbf-nt7cg\" (UID: \"e4d2f9f5-3308-487a-871d-b411f6951ead\") " pod="openstack/dnsmasq-dns-66fc59ccbf-nt7cg"
Jan 22 09:32:08 crc kubenswrapper[4892]: I0122 09:32:08.474556 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e4d2f9f5-3308-487a-871d-b411f6951ead-dns-swift-storage-0\") pod \"dnsmasq-dns-66fc59ccbf-nt7cg\" (UID: \"e4d2f9f5-3308-487a-871d-b411f6951ead\") " pod="openstack/dnsmasq-dns-66fc59ccbf-nt7cg"
Jan 22 09:32:08 crc kubenswrapper[4892]: I0122 09:32:08.474597 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e4d2f9f5-3308-487a-871d-b411f6951ead-openstack-edpm-ipam\") pod \"dnsmasq-dns-66fc59ccbf-nt7cg\" (UID: \"e4d2f9f5-3308-487a-871d-b411f6951ead\") " pod="openstack/dnsmasq-dns-66fc59ccbf-nt7cg"
Jan 22 09:32:08 crc kubenswrapper[4892]: I0122 09:32:08.474638 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4d2f9f5-3308-487a-871d-b411f6951ead-dns-svc\") pod \"dnsmasq-dns-66fc59ccbf-nt7cg\" (UID: \"e4d2f9f5-3308-487a-871d-b411f6951ead\") " pod="openstack/dnsmasq-dns-66fc59ccbf-nt7cg"
Jan 22 09:32:08 crc kubenswrapper[4892]: I0122 09:32:08.474679 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4d2f9f5-3308-487a-871d-b411f6951ead-config\") pod \"dnsmasq-dns-66fc59ccbf-nt7cg\" (UID: \"e4d2f9f5-3308-487a-871d-b411f6951ead\") " pod="openstack/dnsmasq-dns-66fc59ccbf-nt7cg"
Jan 22 09:32:08 crc kubenswrapper[4892]: I0122 09:32:08.474704 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e4d2f9f5-3308-487a-871d-b411f6951ead-ovsdbserver-sb\") pod \"dnsmasq-dns-66fc59ccbf-nt7cg\" (UID: \"e4d2f9f5-3308-487a-871d-b411f6951ead\") " pod="openstack/dnsmasq-dns-66fc59ccbf-nt7cg"
Jan 22 09:32:08 crc kubenswrapper[4892]: I0122 09:32:08.474737 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4d2f9f5-3308-487a-871d-b411f6951ead-ovsdbserver-nb\") pod \"dnsmasq-dns-66fc59ccbf-nt7cg\" (UID: \"e4d2f9f5-3308-487a-871d-b411f6951ead\") " pod="openstack/dnsmasq-dns-66fc59ccbf-nt7cg"
Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:08.476193 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e4d2f9f5-3308-487a-871d-b411f6951ead-openstack-edpm-ipam\") pod \"dnsmasq-dns-66fc59ccbf-nt7cg\" (UID: \"e4d2f9f5-3308-487a-871d-b411f6951ead\") " pod="openstack/dnsmasq-dns-66fc59ccbf-nt7cg"
Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:08.476200 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4d2f9f5-3308-487a-871d-b411f6951ead-ovsdbserver-nb\") pod \"dnsmasq-dns-66fc59ccbf-nt7cg\" (UID: \"e4d2f9f5-3308-487a-871d-b411f6951ead\") " pod="openstack/dnsmasq-dns-66fc59ccbf-nt7cg"
Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:08.476446 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e4d2f9f5-3308-487a-871d-b411f6951ead-dns-swift-storage-0\") pod \"dnsmasq-dns-66fc59ccbf-nt7cg\" (UID: \"e4d2f9f5-3308-487a-871d-b411f6951ead\") " pod="openstack/dnsmasq-dns-66fc59ccbf-nt7cg"
Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:08.476780 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e4d2f9f5-3308-487a-871d-b411f6951ead-ovsdbserver-sb\") pod \"dnsmasq-dns-66fc59ccbf-nt7cg\" (UID: \"e4d2f9f5-3308-487a-871d-b411f6951ead\") " pod="openstack/dnsmasq-dns-66fc59ccbf-nt7cg"
Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:08.476898 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4d2f9f5-3308-487a-871d-b411f6951ead-dns-svc\") pod \"dnsmasq-dns-66fc59ccbf-nt7cg\" (UID: \"e4d2f9f5-3308-487a-871d-b411f6951ead\") " pod="openstack/dnsmasq-dns-66fc59ccbf-nt7cg"
Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:08.477634 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4d2f9f5-3308-487a-871d-b411f6951ead-config\") pod \"dnsmasq-dns-66fc59ccbf-nt7cg\" (UID: \"e4d2f9f5-3308-487a-871d-b411f6951ead\") " pod="openstack/dnsmasq-dns-66fc59ccbf-nt7cg"
Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:08.506049 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4tst\" (UniqueName: \"kubernetes.io/projected/e4d2f9f5-3308-487a-871d-b411f6951ead-kube-api-access-z4tst\") pod \"dnsmasq-dns-66fc59ccbf-nt7cg\" (UID: \"e4d2f9f5-3308-487a-871d-b411f6951ead\") " pod="openstack/dnsmasq-dns-66fc59ccbf-nt7cg"
Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:08.567839 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66fc59ccbf-nt7cg"
Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:08.588644 4892 generic.go:334] "Generic (PLEG): container finished" podID="ce583610-456f-400a-b840-9d2c65ecf4a6" containerID="ade6274ed31e4c12f5e3b614cb123e1a8eb78d2efa08a847db314714639be0cf" exitCode=0
Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:08.588679 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-rdw7w" event={"ID":"ce583610-456f-400a-b840-9d2c65ecf4a6","Type":"ContainerDied","Data":"ade6274ed31e4c12f5e3b614cb123e1a8eb78d2efa08a847db314714639be0cf"}
Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:08.588703 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-rdw7w" event={"ID":"ce583610-456f-400a-b840-9d2c65ecf4a6","Type":"ContainerDied","Data":"48c2d0fdfa9849cf575c0e48d34a275a6fbb901f48e60c30a5efd3dae0c34fc4"}
Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:08.588714 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48c2d0fdfa9849cf575c0e48d34a275a6fbb901f48e60c30a5efd3dae0c34fc4"
Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:08.689765 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-rdw7w"
Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:08.784758 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce583610-456f-400a-b840-9d2c65ecf4a6-ovsdbserver-sb\") pod \"ce583610-456f-400a-b840-9d2c65ecf4a6\" (UID: \"ce583610-456f-400a-b840-9d2c65ecf4a6\") "
Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:08.784808 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce583610-456f-400a-b840-9d2c65ecf4a6-dns-swift-storage-0\") pod \"ce583610-456f-400a-b840-9d2c65ecf4a6\" (UID: \"ce583610-456f-400a-b840-9d2c65ecf4a6\") "
Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:08.784916 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce583610-456f-400a-b840-9d2c65ecf4a6-dns-svc\") pod \"ce583610-456f-400a-b840-9d2c65ecf4a6\" (UID: \"ce583610-456f-400a-b840-9d2c65ecf4a6\") "
Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:08.784960 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce583610-456f-400a-b840-9d2c65ecf4a6-config\") pod \"ce583610-456f-400a-b840-9d2c65ecf4a6\" (UID: \"ce583610-456f-400a-b840-9d2c65ecf4a6\") "
Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:08.784988 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mssvm\" (UniqueName: \"kubernetes.io/projected/ce583610-456f-400a-b840-9d2c65ecf4a6-kube-api-access-mssvm\") pod \"ce583610-456f-400a-b840-9d2c65ecf4a6\" (UID: \"ce583610-456f-400a-b840-9d2c65ecf4a6\") "
Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:08.785188 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce583610-456f-400a-b840-9d2c65ecf4a6-ovsdbserver-nb\") pod \"ce583610-456f-400a-b840-9d2c65ecf4a6\" (UID: \"ce583610-456f-400a-b840-9d2c65ecf4a6\") "
Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:08.790369 4892
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce583610-456f-400a-b840-9d2c65ecf4a6-kube-api-access-mssvm" (OuterVolumeSpecName: "kube-api-access-mssvm") pod "ce583610-456f-400a-b840-9d2c65ecf4a6" (UID: "ce583610-456f-400a-b840-9d2c65ecf4a6"). InnerVolumeSpecName "kube-api-access-mssvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:08.849071 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce583610-456f-400a-b840-9d2c65ecf4a6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ce583610-456f-400a-b840-9d2c65ecf4a6" (UID: "ce583610-456f-400a-b840-9d2c65ecf4a6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:08.856962 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce583610-456f-400a-b840-9d2c65ecf4a6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ce583610-456f-400a-b840-9d2c65ecf4a6" (UID: "ce583610-456f-400a-b840-9d2c65ecf4a6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:08.857502 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce583610-456f-400a-b840-9d2c65ecf4a6-config" (OuterVolumeSpecName: "config") pod "ce583610-456f-400a-b840-9d2c65ecf4a6" (UID: "ce583610-456f-400a-b840-9d2c65ecf4a6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:08.859882 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce583610-456f-400a-b840-9d2c65ecf4a6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ce583610-456f-400a-b840-9d2c65ecf4a6" (UID: "ce583610-456f-400a-b840-9d2c65ecf4a6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:08.867465 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce583610-456f-400a-b840-9d2c65ecf4a6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ce583610-456f-400a-b840-9d2c65ecf4a6" (UID: "ce583610-456f-400a-b840-9d2c65ecf4a6"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:08.887462 4892 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce583610-456f-400a-b840-9d2c65ecf4a6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:08.887492 4892 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce583610-456f-400a-b840-9d2c65ecf4a6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:08.887502 4892 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce583610-456f-400a-b840-9d2c65ecf4a6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:08.887513 4892 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce583610-456f-400a-b840-9d2c65ecf4a6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:08.887522 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce583610-456f-400a-b840-9d2c65ecf4a6-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:08.887531 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mssvm\" (UniqueName: \"kubernetes.io/projected/ce583610-456f-400a-b840-9d2c65ecf4a6-kube-api-access-mssvm\") on node \"crc\" DevicePath \"\"" Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:09.267084 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66fc59ccbf-nt7cg"] Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:09.609525 4892 generic.go:334] "Generic (PLEG): container finished" podID="e4d2f9f5-3308-487a-871d-b411f6951ead" containerID="6e15ec46e1a3cf460cbbcddddae24317a13bd390d04cefaa39bb2d239020b0b4" exitCode=0 Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:09.609924 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-rdw7w" Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:09.610462 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66fc59ccbf-nt7cg" event={"ID":"e4d2f9f5-3308-487a-871d-b411f6951ead","Type":"ContainerDied","Data":"6e15ec46e1a3cf460cbbcddddae24317a13bd390d04cefaa39bb2d239020b0b4"} Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:09.610523 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66fc59ccbf-nt7cg" event={"ID":"e4d2f9f5-3308-487a-871d-b411f6951ead","Type":"ContainerStarted","Data":"1ed12c34eb6f6191095582c9fa0bb4025bab96126ec13eab3dec1b2de7f35912"} Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:09.655746 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-rdw7w"] Jan 22 09:32:09 crc kubenswrapper[4892]: I0122 09:32:09.667303 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-rdw7w"] Jan 22 09:32:10 crc kubenswrapper[4892]: I0122 09:32:10.619801 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66fc59ccbf-nt7cg" event={"ID":"e4d2f9f5-3308-487a-871d-b411f6951ead","Type":"ContainerStarted","Data":"27df97c89ed6ec2be7eeaa6335c35d0f3c289dee520f505f3fa948ca59afe4cd"} Jan 22 09:32:10 crc kubenswrapper[4892]: I0122 09:32:10.620183 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-66fc59ccbf-nt7cg" Jan 22 09:32:10 crc kubenswrapper[4892]: I0122 09:32:10.642051 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-66fc59ccbf-nt7cg" podStartSLOduration=2.642033207 podStartE2EDuration="2.642033207s" podCreationTimestamp="2026-01-22 09:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:32:10.634881263 +0000 UTC m=+1300.478960326" watchObservedRunningTime="2026-01-22 09:32:10.642033207 +0000 UTC m=+1300.486112270" Jan 22 09:32:11 crc kubenswrapper[4892]: I0122 09:32:11.437510 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce583610-456f-400a-b840-9d2c65ecf4a6" path="/var/lib/kubelet/pods/ce583610-456f-400a-b840-9d2c65ecf4a6/volumes" Jan 22 09:32:18 crc kubenswrapper[4892]: I0122 09:32:18.570146 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-66fc59ccbf-nt7cg" Jan 22 09:32:18 crc kubenswrapper[4892]: I0122 09:32:18.639640 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-668b55cdd7-6wmk6"] Jan 22 09:32:18 crc kubenswrapper[4892]: I0122 09:32:18.639897 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-668b55cdd7-6wmk6" podUID="fdd31439-cd48-4ff3-8b4a-444c5e40cc9b" containerName="dnsmasq-dns" containerID="cri-o://6d521226082cd01454e64e50ea9da5ed00dc29840dbb326a86cf6241eca93155" gracePeriod=10 Jan 22 09:32:19 crc kubenswrapper[4892]: I0122 09:32:19.100769 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-668b55cdd7-6wmk6" Jan 22 09:32:19 crc kubenswrapper[4892]: I0122 09:32:19.299565 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-config\") pod \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\" (UID: \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\") " Jan 22 09:32:19 crc kubenswrapper[4892]: I0122 09:32:19.299673 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-ovsdbserver-nb\") pod \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\" (UID: \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\") " Jan 22 09:32:19 crc kubenswrapper[4892]: I0122 09:32:19.299695 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-dns-svc\") pod \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\" (UID: \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\") " Jan 22 09:32:19 crc kubenswrapper[4892]: I0122 09:32:19.299735 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-dns-swift-storage-0\") pod \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\" (UID: \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\") " Jan 22 09:32:19 crc kubenswrapper[4892]: I0122 09:32:19.299826 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-openstack-edpm-ipam\") pod \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\" (UID: \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\") " Jan 22 09:32:19 crc kubenswrapper[4892]: I0122 09:32:19.299859 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-ovsdbserver-sb\") pod \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\" (UID: \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\") " Jan 22 09:32:19 crc kubenswrapper[4892]: I0122 09:32:19.299914 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z72l\" (UniqueName: \"kubernetes.io/projected/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-kube-api-access-8z72l\") pod \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\" (UID: \"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b\") " Jan 22 09:32:19 crc kubenswrapper[4892]: I0122 09:32:19.307364 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-kube-api-access-8z72l" (OuterVolumeSpecName: "kube-api-access-8z72l") pod "fdd31439-cd48-4ff3-8b4a-444c5e40cc9b" (UID: "fdd31439-cd48-4ff3-8b4a-444c5e40cc9b"). InnerVolumeSpecName "kube-api-access-8z72l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:32:19 crc kubenswrapper[4892]: I0122 09:32:19.350327 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-config" (OuterVolumeSpecName: "config") pod "fdd31439-cd48-4ff3-8b4a-444c5e40cc9b" (UID: "fdd31439-cd48-4ff3-8b4a-444c5e40cc9b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:32:19 crc kubenswrapper[4892]: I0122 09:32:19.354768 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fdd31439-cd48-4ff3-8b4a-444c5e40cc9b" (UID: "fdd31439-cd48-4ff3-8b4a-444c5e40cc9b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:32:19 crc kubenswrapper[4892]: I0122 09:32:19.355631 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fdd31439-cd48-4ff3-8b4a-444c5e40cc9b" (UID: "fdd31439-cd48-4ff3-8b4a-444c5e40cc9b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:32:19 crc kubenswrapper[4892]: I0122 09:32:19.367897 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fdd31439-cd48-4ff3-8b4a-444c5e40cc9b" (UID: "fdd31439-cd48-4ff3-8b4a-444c5e40cc9b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:32:19 crc kubenswrapper[4892]: I0122 09:32:19.374919 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fdd31439-cd48-4ff3-8b4a-444c5e40cc9b" (UID: "fdd31439-cd48-4ff3-8b4a-444c5e40cc9b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:32:19 crc kubenswrapper[4892]: I0122 09:32:19.379579 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "fdd31439-cd48-4ff3-8b4a-444c5e40cc9b" (UID: "fdd31439-cd48-4ff3-8b4a-444c5e40cc9b"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:32:19 crc kubenswrapper[4892]: I0122 09:32:19.402595 4892 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:32:19 crc kubenswrapper[4892]: I0122 09:32:19.402624 4892 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 09:32:19 crc kubenswrapper[4892]: I0122 09:32:19.402634 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z72l\" (UniqueName: \"kubernetes.io/projected/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-kube-api-access-8z72l\") on node \"crc\" DevicePath \"\"" Jan 22 09:32:19 crc kubenswrapper[4892]: I0122 09:32:19.402646 4892 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:32:19 crc kubenswrapper[4892]: I0122 09:32:19.402654 4892 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 09:32:19 crc kubenswrapper[4892]: I0122 09:32:19.402664 4892 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 09:32:19 crc kubenswrapper[4892]: I0122 09:32:19.402672 4892 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 09:32:19 crc kubenswrapper[4892]: I0122 09:32:19.704517 4892 generic.go:334] "Generic (PLEG): container finished" podID="fdd31439-cd48-4ff3-8b4a-444c5e40cc9b" containerID="6d521226082cd01454e64e50ea9da5ed00dc29840dbb326a86cf6241eca93155" exitCode=0 Jan 22 09:32:19 crc kubenswrapper[4892]: I0122 09:32:19.704566 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-668b55cdd7-6wmk6" event={"ID":"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b","Type":"ContainerDied","Data":"6d521226082cd01454e64e50ea9da5ed00dc29840dbb326a86cf6241eca93155"} Jan 22 09:32:19 crc kubenswrapper[4892]: I0122 09:32:19.704593 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-668b55cdd7-6wmk6" event={"ID":"fdd31439-cd48-4ff3-8b4a-444c5e40cc9b","Type":"ContainerDied","Data":"784454ea7e6c18279cd715bc71bad31da7892a145dec83ca1e25f844660fded3"} Jan 22 09:32:19 crc kubenswrapper[4892]: I0122 09:32:19.704609 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-668b55cdd7-6wmk6" Jan 22 09:32:19 crc kubenswrapper[4892]: I0122 09:32:19.704613 4892 scope.go:117] "RemoveContainer" containerID="6d521226082cd01454e64e50ea9da5ed00dc29840dbb326a86cf6241eca93155" Jan 22 09:32:19 crc kubenswrapper[4892]: I0122 09:32:19.763068 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-668b55cdd7-6wmk6"] Jan 22 09:32:19 crc kubenswrapper[4892]: I0122 09:32:19.774778 4892 scope.go:117] "RemoveContainer" containerID="6b5d7bf96e90dfd3688ad38f214ee543ed393acea267693a137277a763df70fa" Jan 22 09:32:19 crc kubenswrapper[4892]: I0122 09:32:19.783138 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-668b55cdd7-6wmk6"] Jan 22 09:32:19 crc kubenswrapper[4892]: I0122 09:32:19.806562 4892 scope.go:117] "RemoveContainer" containerID="6d521226082cd01454e64e50ea9da5ed00dc29840dbb326a86cf6241eca93155" Jan 22 09:32:19 crc kubenswrapper[4892]: E0122 09:32:19.807090 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d521226082cd01454e64e50ea9da5ed00dc29840dbb326a86cf6241eca93155\": container with ID starting with 6d521226082cd01454e64e50ea9da5ed00dc29840dbb326a86cf6241eca93155 not found: ID does not exist" containerID="6d521226082cd01454e64e50ea9da5ed00dc29840dbb326a86cf6241eca93155" Jan 22 09:32:19 crc kubenswrapper[4892]: I0122 09:32:19.807152 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d521226082cd01454e64e50ea9da5ed00dc29840dbb326a86cf6241eca93155"} err="failed to get container status \"6d521226082cd01454e64e50ea9da5ed00dc29840dbb326a86cf6241eca93155\": rpc error: code = NotFound desc = could not find container \"6d521226082cd01454e64e50ea9da5ed00dc29840dbb326a86cf6241eca93155\": container with ID starting with 6d521226082cd01454e64e50ea9da5ed00dc29840dbb326a86cf6241eca93155 not found: ID does not exist" Jan 22 09:32:19 crc kubenswrapper[4892]: I0122 09:32:19.807188 4892 scope.go:117] "RemoveContainer" containerID="6b5d7bf96e90dfd3688ad38f214ee543ed393acea267693a137277a763df70fa" Jan 22 09:32:19 crc kubenswrapper[4892]: E0122 09:32:19.807675 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b5d7bf96e90dfd3688ad38f214ee543ed393acea267693a137277a763df70fa\": container with ID starting with 6b5d7bf96e90dfd3688ad38f214ee543ed393acea267693a137277a763df70fa not found: ID does not exist" containerID="6b5d7bf96e90dfd3688ad38f214ee543ed393acea267693a137277a763df70fa" Jan 22 09:32:19 crc kubenswrapper[4892]: I0122 09:32:19.807724 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b5d7bf96e90dfd3688ad38f214ee543ed393acea267693a137277a763df70fa"} err="failed to get container status \"6b5d7bf96e90dfd3688ad38f214ee543ed393acea267693a137277a763df70fa\": rpc error: code = NotFound desc = could not find container \"6b5d7bf96e90dfd3688ad38f214ee543ed393acea267693a137277a763df70fa\": container with ID starting with 6b5d7bf96e90dfd3688ad38f214ee543ed393acea267693a137277a763df70fa not found: ID does not exist" Jan 22 09:32:21 crc kubenswrapper[4892]: I0122 09:32:21.427649 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdd31439-cd48-4ff3-8b4a-444c5e40cc9b" path="/var/lib/kubelet/pods/fdd31439-cd48-4ff3-8b4a-444c5e40cc9b/volumes" Jan 22 09:32:31 crc kubenswrapper[4892]: I0122 09:32:31.389971 4892 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5"] Jan 22 09:32:31 crc kubenswrapper[4892]: E0122 09:32:31.390967 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd31439-cd48-4ff3-8b4a-444c5e40cc9b" containerName="dnsmasq-dns" Jan 22 09:32:31 crc kubenswrapper[4892]: I0122 09:32:31.390983 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd31439-cd48-4ff3-8b4a-444c5e40cc9b" containerName="dnsmasq-dns" Jan 22 09:32:31 crc kubenswrapper[4892]: E0122 09:32:31.391012 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd31439-cd48-4ff3-8b4a-444c5e40cc9b" containerName="init" Jan 22 09:32:31 crc kubenswrapper[4892]: I0122 09:32:31.391020 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd31439-cd48-4ff3-8b4a-444c5e40cc9b" containerName="init" Jan 22 09:32:31 crc kubenswrapper[4892]: E0122 09:32:31.391031 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce583610-456f-400a-b840-9d2c65ecf4a6" containerName="init" Jan 22 09:32:31 crc kubenswrapper[4892]: I0122 09:32:31.391040 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce583610-456f-400a-b840-9d2c65ecf4a6" containerName="init" Jan 22 09:32:31 crc kubenswrapper[4892]: E0122 09:32:31.391060 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce583610-456f-400a-b840-9d2c65ecf4a6" containerName="dnsmasq-dns" Jan 22 09:32:31 crc kubenswrapper[4892]: I0122 09:32:31.391068 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce583610-456f-400a-b840-9d2c65ecf4a6" containerName="dnsmasq-dns" Jan 22 09:32:31 crc kubenswrapper[4892]: I0122 09:32:31.391321 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdd31439-cd48-4ff3-8b4a-444c5e40cc9b" containerName="dnsmasq-dns" Jan 22 09:32:31 crc kubenswrapper[4892]: I0122 09:32:31.391346 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce583610-456f-400a-b840-9d2c65ecf4a6" containerName="dnsmasq-dns" Jan 22 09:32:31 crc kubenswrapper[4892]: I0122 09:32:31.392076 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5" Jan 22 09:32:31 crc kubenswrapper[4892]: I0122 09:32:31.394119 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:32:31 crc kubenswrapper[4892]: I0122 09:32:31.394532 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:32:31 crc kubenswrapper[4892]: I0122 09:32:31.394195 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:32:31 crc kubenswrapper[4892]: I0122 09:32:31.395916 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lbm5h" Jan 22 09:32:31 crc kubenswrapper[4892]: I0122 09:32:31.402511 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5"] Jan 22 09:32:31 crc kubenswrapper[4892]: I0122 09:32:31.428392 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krrfl\" (UniqueName: \"kubernetes.io/projected/8bb695bf-11e7-478a-a348-2a06ef0bcdaf-kube-api-access-krrfl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5\" (UID: \"8bb695bf-11e7-478a-a348-2a06ef0bcdaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5" Jan 22 09:32:31 crc kubenswrapper[4892]: I0122 09:32:31.428489 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bb695bf-11e7-478a-a348-2a06ef0bcdaf-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5\" (UID: \"8bb695bf-11e7-478a-a348-2a06ef0bcdaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5" Jan 22 09:32:31 crc kubenswrapper[4892]: I0122 09:32:31.428537 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8bb695bf-11e7-478a-a348-2a06ef0bcdaf-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5\" (UID: \"8bb695bf-11e7-478a-a348-2a06ef0bcdaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5" Jan 22 09:32:31 crc kubenswrapper[4892]: I0122 09:32:31.428572 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8bb695bf-11e7-478a-a348-2a06ef0bcdaf-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5\" (UID: \"8bb695bf-11e7-478a-a348-2a06ef0bcdaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5" Jan 22 09:32:31 crc kubenswrapper[4892]: I0122 09:32:31.529820 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8bb695bf-11e7-478a-a348-2a06ef0bcdaf-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5\" (UID: \"8bb695bf-11e7-478a-a348-2a06ef0bcdaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5" Jan 22 09:32:31 crc kubenswrapper[4892]: I0122 09:32:31.529963 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krrfl\" (UniqueName: 
\"kubernetes.io/projected/8bb695bf-11e7-478a-a348-2a06ef0bcdaf-kube-api-access-krrfl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5\" (UID: \"8bb695bf-11e7-478a-a348-2a06ef0bcdaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5" Jan 22 09:32:31 crc kubenswrapper[4892]: I0122 09:32:31.530056 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bb695bf-11e7-478a-a348-2a06ef0bcdaf-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5\" (UID: \"8bb695bf-11e7-478a-a348-2a06ef0bcdaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5" Jan 22 09:32:31 crc kubenswrapper[4892]: I0122 09:32:31.530110 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8bb695bf-11e7-478a-a348-2a06ef0bcdaf-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5\" (UID: \"8bb695bf-11e7-478a-a348-2a06ef0bcdaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5" Jan 22 09:32:31 crc kubenswrapper[4892]: I0122 09:32:31.537882 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8bb695bf-11e7-478a-a348-2a06ef0bcdaf-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5\" (UID: \"8bb695bf-11e7-478a-a348-2a06ef0bcdaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5" Jan 22 09:32:31 crc kubenswrapper[4892]: I0122 09:32:31.538094 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8bb695bf-11e7-478a-a348-2a06ef0bcdaf-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5\" (UID: \"8bb695bf-11e7-478a-a348-2a06ef0bcdaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5" Jan 22 09:32:31 crc kubenswrapper[4892]: I0122 09:32:31.538158 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bb695bf-11e7-478a-a348-2a06ef0bcdaf-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5\" (UID: \"8bb695bf-11e7-478a-a348-2a06ef0bcdaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5" Jan 22 09:32:31 crc kubenswrapper[4892]: I0122 09:32:31.546506 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krrfl\" (UniqueName: \"kubernetes.io/projected/8bb695bf-11e7-478a-a348-2a06ef0bcdaf-kube-api-access-krrfl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5\" (UID: \"8bb695bf-11e7-478a-a348-2a06ef0bcdaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5" Jan 22 09:32:31 crc kubenswrapper[4892]: I0122 09:32:31.709258 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5" Jan 22 09:32:31 crc kubenswrapper[4892]: I0122 09:32:31.848082 4892 generic.go:334] "Generic (PLEG): container finished" podID="57552917-a09b-4f52-96b5-c7749b9af779" containerID="69c4846ab8535600513e91727aa9e54c309d80943a9a5c2223a40cc0d2315c1d" exitCode=0 Jan 22 09:32:31 crc kubenswrapper[4892]: I0122 09:32:31.848560 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"57552917-a09b-4f52-96b5-c7749b9af779","Type":"ContainerDied","Data":"69c4846ab8535600513e91727aa9e54c309d80943a9a5c2223a40cc0d2315c1d"} Jan 22 09:32:31 crc kubenswrapper[4892]: I0122 09:32:31.850650 4892 generic.go:334] "Generic (PLEG): container finished" podID="30fa58bc-46e3-40c4-ad73-3f2e1f8341dd" containerID="f5283fb0d536ad98b8e6714b551adc32cd90054d386ce61b5a0c9022bd402474" exitCode=0 Jan 22 09:32:31 crc kubenswrapper[4892]: I0122 09:32:31.850680 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd","Type":"ContainerDied","Data":"f5283fb0d536ad98b8e6714b551adc32cd90054d386ce61b5a0c9022bd402474"} Jan 22 09:32:32 crc kubenswrapper[4892]: W0122 09:32:32.265132 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8bb695bf_11e7_478a_a348_2a06ef0bcdaf.slice/crio-b9a95483b270afdb747cca5a99ed4fa4fbd5c3b30a704a7aaf863fd1a33838bc WatchSource:0}: Error finding container b9a95483b270afdb747cca5a99ed4fa4fbd5c3b30a704a7aaf863fd1a33838bc: Status 404 returned error can't find the container with id b9a95483b270afdb747cca5a99ed4fa4fbd5c3b30a704a7aaf863fd1a33838bc Jan 22 09:32:32 crc kubenswrapper[4892]: I0122 09:32:32.265865 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5"] Jan 22 09:32:32 crc kubenswrapper[4892]: I0122 09:32:32.863888 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5" event={"ID":"8bb695bf-11e7-478a-a348-2a06ef0bcdaf","Type":"ContainerStarted","Data":"b9a95483b270afdb747cca5a99ed4fa4fbd5c3b30a704a7aaf863fd1a33838bc"} Jan 22 09:32:32 crc kubenswrapper[4892]: I0122 09:32:32.866911 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"30fa58bc-46e3-40c4-ad73-3f2e1f8341dd","Type":"ContainerStarted","Data":"07564e6c8a64486af4bbc8341ae1360eda8bdd8fe874bb3b95c009ccb37dc23a"} Jan 22 09:32:32 crc kubenswrapper[4892]: I0122 09:32:32.867818 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 22 09:32:32 crc kubenswrapper[4892]: I0122 09:32:32.870124 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"57552917-a09b-4f52-96b5-c7749b9af779","Type":"ContainerStarted","Data":"1e70192cf3ccc58a230dbda475aaff55951398ae840eecef2a6a29df4524bc0f"} Jan 22 09:32:32 crc kubenswrapper[4892]: I0122 09:32:32.870776 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:32:32 crc kubenswrapper[4892]: I0122 09:32:32.938507 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.938467599 podStartE2EDuration="36.938467599s" podCreationTimestamp="2026-01-22 09:31:56 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:32:32.905657658 +0000 UTC m=+1322.749736721" watchObservedRunningTime="2026-01-22 09:32:32.938467599 +0000 UTC m=+1322.782546672" Jan 22 09:32:32 crc kubenswrapper[4892]: I0122 09:32:32.950220 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.950194765 podStartE2EDuration="36.950194765s" podCreationTimestamp="2026-01-22 09:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:32:32.931183921 +0000 UTC m=+1322.775263004" watchObservedRunningTime="2026-01-22 09:32:32.950194765 +0000 UTC m=+1322.794273848" Jan 22 09:32:35 crc kubenswrapper[4892]: I0122 09:32:35.279893 4892 scope.go:117] "RemoveContainer" containerID="2694520b0f94a82ec6056b2dc04949ec5944d6743a4823335620803314210461" Jan 22 09:32:42 crc kubenswrapper[4892]: I0122 09:32:42.985513 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5" event={"ID":"8bb695bf-11e7-478a-a348-2a06ef0bcdaf","Type":"ContainerStarted","Data":"491f9beab76cc8f932f259e55937da0852014397bf1d93a196189b2c0dbad452"} Jan 22 09:32:43 crc kubenswrapper[4892]: I0122 09:32:43.018404 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5" podStartSLOduration=2.298658395 podStartE2EDuration="12.018375679s" podCreationTimestamp="2026-01-22 09:32:31 +0000 UTC" firstStartedPulling="2026-01-22 09:32:32.267781542 +0000 UTC m=+1322.111860605" lastFinishedPulling="2026-01-22 09:32:41.987498826 +0000 UTC m=+1331.831577889" observedRunningTime="2026-01-22 09:32:43.008973299 +0000 UTC m=+1332.853052402" watchObservedRunningTime="2026-01-22 09:32:43.018375679 +0000 UTC m=+1332.862454772" Jan 22 09:32:46 crc kubenswrapper[4892]: I0122 09:32:46.958618 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:32:46 crc kubenswrapper[4892]: I0122 09:32:46.967583 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 22 09:32:55 crc kubenswrapper[4892]: I0122 09:32:55.152611 4892 generic.go:334] "Generic (PLEG): container finished" podID="8bb695bf-11e7-478a-a348-2a06ef0bcdaf" containerID="491f9beab76cc8f932f259e55937da0852014397bf1d93a196189b2c0dbad452" exitCode=0 Jan 22 09:32:55 crc kubenswrapper[4892]: I0122 09:32:55.152704 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5" event={"ID":"8bb695bf-11e7-478a-a348-2a06ef0bcdaf","Type":"ContainerDied","Data":"491f9beab76cc8f932f259e55937da0852014397bf1d93a196189b2c0dbad452"} Jan 22 09:32:56 crc kubenswrapper[4892]: I0122 09:32:56.547321 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5" Jan 22 09:32:56 crc kubenswrapper[4892]: I0122 09:32:56.699325 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bb695bf-11e7-478a-a348-2a06ef0bcdaf-repo-setup-combined-ca-bundle\") pod \"8bb695bf-11e7-478a-a348-2a06ef0bcdaf\" (UID: \"8bb695bf-11e7-478a-a348-2a06ef0bcdaf\") " Jan 22 09:32:56 crc kubenswrapper[4892]: I0122 09:32:56.699410 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8bb695bf-11e7-478a-a348-2a06ef0bcdaf-ssh-key-openstack-edpm-ipam\") pod \"8bb695bf-11e7-478a-a348-2a06ef0bcdaf\" (UID: \"8bb695bf-11e7-478a-a348-2a06ef0bcdaf\") " Jan 22 09:32:56 crc kubenswrapper[4892]: I0122 09:32:56.699452 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8bb695bf-11e7-478a-a348-2a06ef0bcdaf-inventory\") pod \"8bb695bf-11e7-478a-a348-2a06ef0bcdaf\" (UID: \"8bb695bf-11e7-478a-a348-2a06ef0bcdaf\") " Jan 22 09:32:56 crc kubenswrapper[4892]: I0122 09:32:56.699556 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krrfl\" (UniqueName: \"kubernetes.io/projected/8bb695bf-11e7-478a-a348-2a06ef0bcdaf-kube-api-access-krrfl\") pod \"8bb695bf-11e7-478a-a348-2a06ef0bcdaf\" (UID: \"8bb695bf-11e7-478a-a348-2a06ef0bcdaf\") " Jan 22 09:32:56 crc kubenswrapper[4892]: I0122 09:32:56.705612 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bb695bf-11e7-478a-a348-2a06ef0bcdaf-kube-api-access-krrfl" (OuterVolumeSpecName: "kube-api-access-krrfl") pod "8bb695bf-11e7-478a-a348-2a06ef0bcdaf" (UID: "8bb695bf-11e7-478a-a348-2a06ef0bcdaf"). InnerVolumeSpecName "kube-api-access-krrfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:32:56 crc kubenswrapper[4892]: I0122 09:32:56.711983 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bb695bf-11e7-478a-a348-2a06ef0bcdaf-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "8bb695bf-11e7-478a-a348-2a06ef0bcdaf" (UID: "8bb695bf-11e7-478a-a348-2a06ef0bcdaf"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:32:56 crc kubenswrapper[4892]: I0122 09:32:56.727194 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bb695bf-11e7-478a-a348-2a06ef0bcdaf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8bb695bf-11e7-478a-a348-2a06ef0bcdaf" (UID: "8bb695bf-11e7-478a-a348-2a06ef0bcdaf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:32:56 crc kubenswrapper[4892]: I0122 09:32:56.730394 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bb695bf-11e7-478a-a348-2a06ef0bcdaf-inventory" (OuterVolumeSpecName: "inventory") pod "8bb695bf-11e7-478a-a348-2a06ef0bcdaf" (UID: "8bb695bf-11e7-478a-a348-2a06ef0bcdaf"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:32:56 crc kubenswrapper[4892]: I0122 09:32:56.801831 4892 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bb695bf-11e7-478a-a348-2a06ef0bcdaf-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:32:56 crc kubenswrapper[4892]: I0122 09:32:56.801866 4892 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8bb695bf-11e7-478a-a348-2a06ef0bcdaf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:32:56 crc kubenswrapper[4892]: I0122 09:32:56.801880 4892 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8bb695bf-11e7-478a-a348-2a06ef0bcdaf-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 09:32:56 crc kubenswrapper[4892]: I0122 09:32:56.801892 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krrfl\" (UniqueName: \"kubernetes.io/projected/8bb695bf-11e7-478a-a348-2a06ef0bcdaf-kube-api-access-krrfl\") on node \"crc\" DevicePath \"\"" Jan 22 09:32:57 crc kubenswrapper[4892]: I0122 09:32:57.173481 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5" event={"ID":"8bb695bf-11e7-478a-a348-2a06ef0bcdaf","Type":"ContainerDied","Data":"b9a95483b270afdb747cca5a99ed4fa4fbd5c3b30a704a7aaf863fd1a33838bc"} Jan 22 09:32:57 crc kubenswrapper[4892]: I0122 09:32:57.173520 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9a95483b270afdb747cca5a99ed4fa4fbd5c3b30a704a7aaf863fd1a33838bc" Jan 22 09:32:57 crc kubenswrapper[4892]: I0122 09:32:57.173537 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5" Jan 22 09:32:57 crc kubenswrapper[4892]: I0122 09:32:57.246254 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-48xvm"] Jan 22 09:32:57 crc kubenswrapper[4892]: E0122 09:32:57.246809 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bb695bf-11e7-478a-a348-2a06ef0bcdaf" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 22 09:32:57 crc kubenswrapper[4892]: I0122 09:32:57.246835 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bb695bf-11e7-478a-a348-2a06ef0bcdaf" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 22 09:32:57 crc kubenswrapper[4892]: I0122 09:32:57.247085 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bb695bf-11e7-478a-a348-2a06ef0bcdaf" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 22 09:32:57 crc kubenswrapper[4892]: I0122 09:32:57.248139 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-48xvm" Jan 22 09:32:57 crc kubenswrapper[4892]: I0122 09:32:57.251232 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:32:57 crc kubenswrapper[4892]: I0122 09:32:57.251680 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:32:57 crc kubenswrapper[4892]: I0122 09:32:57.251739 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:32:57 crc kubenswrapper[4892]: I0122 09:32:57.253314 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lbm5h" Jan 22 09:32:57 crc kubenswrapper[4892]: I0122 09:32:57.274523 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-48xvm"] Jan 22 09:32:57 crc kubenswrapper[4892]: I0122 09:32:57.413331 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8bbr\" (UniqueName: \"kubernetes.io/projected/4316ad67-9810-4253-bcfb-faa1b9936429-kube-api-access-h8bbr\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-48xvm\" (UID: \"4316ad67-9810-4253-bcfb-faa1b9936429\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-48xvm" Jan 22 09:32:57 crc kubenswrapper[4892]: I0122 09:32:57.413723 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4316ad67-9810-4253-bcfb-faa1b9936429-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-48xvm\" (UID: \"4316ad67-9810-4253-bcfb-faa1b9936429\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-48xvm" Jan 22 09:32:57 crc kubenswrapper[4892]: I0122 09:32:57.413761 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4316ad67-9810-4253-bcfb-faa1b9936429-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-48xvm\" (UID: \"4316ad67-9810-4253-bcfb-faa1b9936429\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-48xvm" Jan 22 09:32:57 crc kubenswrapper[4892]: I0122 09:32:57.516139 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4316ad67-9810-4253-bcfb-faa1b9936429-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-48xvm\" (UID: \"4316ad67-9810-4253-bcfb-faa1b9936429\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-48xvm" Jan 22 09:32:57 crc kubenswrapper[4892]: I0122 09:32:57.516222 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4316ad67-9810-4253-bcfb-faa1b9936429-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-48xvm\" (UID: \"4316ad67-9810-4253-bcfb-faa1b9936429\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-48xvm" Jan 22 09:32:57 crc kubenswrapper[4892]: I0122 09:32:57.516345 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8bbr\" (UniqueName: \"kubernetes.io/projected/4316ad67-9810-4253-bcfb-faa1b9936429-kube-api-access-h8bbr\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-48xvm\" (UID: \"4316ad67-9810-4253-bcfb-faa1b9936429\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-48xvm" Jan 22 09:32:57 crc kubenswrapper[4892]: I0122 09:32:57.520688 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4316ad67-9810-4253-bcfb-faa1b9936429-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-48xvm\" (UID: \"4316ad67-9810-4253-bcfb-faa1b9936429\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-48xvm" Jan 22 09:32:57 crc kubenswrapper[4892]: I0122 09:32:57.521251 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4316ad67-9810-4253-bcfb-faa1b9936429-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-48xvm\" (UID: \"4316ad67-9810-4253-bcfb-faa1b9936429\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-48xvm" Jan 22 09:32:57 crc kubenswrapper[4892]: I0122 09:32:57.537862 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8bbr\" (UniqueName: \"kubernetes.io/projected/4316ad67-9810-4253-bcfb-faa1b9936429-kube-api-access-h8bbr\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-48xvm\" (UID: \"4316ad67-9810-4253-bcfb-faa1b9936429\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-48xvm" Jan 22 09:32:57 crc kubenswrapper[4892]: I0122 09:32:57.575930 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-48xvm" Jan 22 09:32:58 crc kubenswrapper[4892]: I0122 09:32:58.099929 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-48xvm"] Jan 22 09:32:58 crc kubenswrapper[4892]: I0122 09:32:58.184512 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-48xvm" event={"ID":"4316ad67-9810-4253-bcfb-faa1b9936429","Type":"ContainerStarted","Data":"799b8592cf97ec8636cae85e98ab3ed0b318ed54d06bb2d2d25f794cf20fe89b"} Jan 22 09:32:59 crc kubenswrapper[4892]: I0122 09:32:59.194807 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-48xvm" event={"ID":"4316ad67-9810-4253-bcfb-faa1b9936429","Type":"ContainerStarted","Data":"f4c32409639c05a46bd4c066f8045a813f2e456fe6ed6ba6bba1761ddcd3357b"} Jan 22 09:32:59 crc kubenswrapper[4892]: I0122 09:32:59.218688 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-48xvm" podStartSLOduration=1.7803202900000001 podStartE2EDuration="2.218665513s" podCreationTimestamp="2026-01-22 09:32:57 +0000 UTC" firstStartedPulling="2026-01-22 09:32:58.099456815 +0000 UTC m=+1347.943535878" lastFinishedPulling="2026-01-22 09:32:58.537802038 +0000 UTC m=+1348.381881101" observedRunningTime="2026-01-22 09:32:59.209227222 +0000 UTC m=+1349.053306295" watchObservedRunningTime="2026-01-22 09:32:59.218665513 +0000 UTC m=+1349.062744576" Jan 22 09:33:01 crc kubenswrapper[4892]: I0122 09:33:01.216169 4892 generic.go:334] "Generic (PLEG): container finished" podID="4316ad67-9810-4253-bcfb-faa1b9936429" containerID="f4c32409639c05a46bd4c066f8045a813f2e456fe6ed6ba6bba1761ddcd3357b" exitCode=0 Jan 22 09:33:01 crc kubenswrapper[4892]: I0122 09:33:01.216763 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-48xvm" event={"ID":"4316ad67-9810-4253-bcfb-faa1b9936429","Type":"ContainerDied","Data":"f4c32409639c05a46bd4c066f8045a813f2e456fe6ed6ba6bba1761ddcd3357b"} Jan 22 09:33:02 crc kubenswrapper[4892]: I0122 09:33:02.633386 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-48xvm" Jan 22 09:33:02 crc kubenswrapper[4892]: I0122 09:33:02.816078 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4316ad67-9810-4253-bcfb-faa1b9936429-inventory\") pod \"4316ad67-9810-4253-bcfb-faa1b9936429\" (UID: \"4316ad67-9810-4253-bcfb-faa1b9936429\") " Jan 22 09:33:02 crc kubenswrapper[4892]: I0122 09:33:02.816172 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8bbr\" (UniqueName: \"kubernetes.io/projected/4316ad67-9810-4253-bcfb-faa1b9936429-kube-api-access-h8bbr\") pod \"4316ad67-9810-4253-bcfb-faa1b9936429\" (UID: \"4316ad67-9810-4253-bcfb-faa1b9936429\") " Jan 22 09:33:02 crc kubenswrapper[4892]: I0122 09:33:02.816278 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4316ad67-9810-4253-bcfb-faa1b9936429-ssh-key-openstack-edpm-ipam\") pod \"4316ad67-9810-4253-bcfb-faa1b9936429\" (UID: \"4316ad67-9810-4253-bcfb-faa1b9936429\") " Jan 22 09:33:02 crc kubenswrapper[4892]: I0122 09:33:02.823540 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4316ad67-9810-4253-bcfb-faa1b9936429-kube-api-access-h8bbr" (OuterVolumeSpecName: "kube-api-access-h8bbr") pod "4316ad67-9810-4253-bcfb-faa1b9936429" (UID: "4316ad67-9810-4253-bcfb-faa1b9936429"). InnerVolumeSpecName "kube-api-access-h8bbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:33:02 crc kubenswrapper[4892]: I0122 09:33:02.848352 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4316ad67-9810-4253-bcfb-faa1b9936429-inventory" (OuterVolumeSpecName: "inventory") pod "4316ad67-9810-4253-bcfb-faa1b9936429" (UID: "4316ad67-9810-4253-bcfb-faa1b9936429"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:33:02 crc kubenswrapper[4892]: I0122 09:33:02.855850 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4316ad67-9810-4253-bcfb-faa1b9936429-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4316ad67-9810-4253-bcfb-faa1b9936429" (UID: "4316ad67-9810-4253-bcfb-faa1b9936429"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:33:02 crc kubenswrapper[4892]: I0122 09:33:02.919159 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8bbr\" (UniqueName: \"kubernetes.io/projected/4316ad67-9810-4253-bcfb-faa1b9936429-kube-api-access-h8bbr\") on node \"crc\" DevicePath \"\"" Jan 22 09:33:02 crc kubenswrapper[4892]: I0122 09:33:02.919460 4892 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4316ad67-9810-4253-bcfb-faa1b9936429-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:33:02 crc kubenswrapper[4892]: I0122 09:33:02.919473 4892 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4316ad67-9810-4253-bcfb-faa1b9936429-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 09:33:03 crc kubenswrapper[4892]: I0122 09:33:03.233774 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-48xvm" event={"ID":"4316ad67-9810-4253-bcfb-faa1b9936429","Type":"ContainerDied","Data":"799b8592cf97ec8636cae85e98ab3ed0b318ed54d06bb2d2d25f794cf20fe89b"} Jan 22 09:33:03 crc kubenswrapper[4892]: I0122 09:33:03.233837 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-48xvm" Jan 22 09:33:03 crc kubenswrapper[4892]: I0122 09:33:03.233893 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="799b8592cf97ec8636cae85e98ab3ed0b318ed54d06bb2d2d25f794cf20fe89b" Jan 22 09:33:03 crc kubenswrapper[4892]: I0122 09:33:03.332003 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk"] Jan 22 09:33:03 crc kubenswrapper[4892]: E0122 09:33:03.332581 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4316ad67-9810-4253-bcfb-faa1b9936429" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 22 09:33:03 crc kubenswrapper[4892]: I0122 09:33:03.332605 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="4316ad67-9810-4253-bcfb-faa1b9936429" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 22 09:33:03 crc kubenswrapper[4892]: I0122 09:33:03.332822 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="4316ad67-9810-4253-bcfb-faa1b9936429" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 22 09:33:03 crc kubenswrapper[4892]: I0122 09:33:03.333477 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk" Jan 22 09:33:03 crc kubenswrapper[4892]: I0122 09:33:03.335519 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lbm5h" Jan 22 09:33:03 crc kubenswrapper[4892]: I0122 09:33:03.335593 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:33:03 crc kubenswrapper[4892]: I0122 09:33:03.336126 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:33:03 crc kubenswrapper[4892]: I0122 09:33:03.336595 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:33:03 crc kubenswrapper[4892]: I0122 09:33:03.350849 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk"] Jan 22 09:33:03 crc kubenswrapper[4892]: I0122 09:33:03.429813 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm9kj\" (UniqueName: \"kubernetes.io/projected/3ca49e96-a4fc-4e54-bb55-b32d42d72734-kube-api-access-hm9kj\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk\" (UID: \"3ca49e96-a4fc-4e54-bb55-b32d42d72734\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk" Jan 22 09:33:03 crc kubenswrapper[4892]: I0122 09:33:03.429861 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ca49e96-a4fc-4e54-bb55-b32d42d72734-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk\" (UID: \"3ca49e96-a4fc-4e54-bb55-b32d42d72734\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk" Jan 22 09:33:03 crc kubenswrapper[4892]: I0122 09:33:03.429906 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ca49e96-a4fc-4e54-bb55-b32d42d72734-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk\" (UID: \"3ca49e96-a4fc-4e54-bb55-b32d42d72734\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk" Jan 22 09:33:03 crc kubenswrapper[4892]: I0122 09:33:03.429942 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ca49e96-a4fc-4e54-bb55-b32d42d72734-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk\" (UID: \"3ca49e96-a4fc-4e54-bb55-b32d42d72734\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk" Jan 22 09:33:03 crc kubenswrapper[4892]: I0122 09:33:03.531821 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm9kj\" (UniqueName: \"kubernetes.io/projected/3ca49e96-a4fc-4e54-bb55-b32d42d72734-kube-api-access-hm9kj\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk\" (UID: \"3ca49e96-a4fc-4e54-bb55-b32d42d72734\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk" Jan 22 09:33:03 crc kubenswrapper[4892]: I0122 09:33:03.531869 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/3ca49e96-a4fc-4e54-bb55-b32d42d72734-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk\" (UID: \"3ca49e96-a4fc-4e54-bb55-b32d42d72734\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk" Jan 22 09:33:03 crc kubenswrapper[4892]: I0122 09:33:03.531918 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ca49e96-a4fc-4e54-bb55-b32d42d72734-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk\" (UID: \"3ca49e96-a4fc-4e54-bb55-b32d42d72734\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk" Jan 22 09:33:03 crc kubenswrapper[4892]: I0122 09:33:03.531944 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ca49e96-a4fc-4e54-bb55-b32d42d72734-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk\" (UID: \"3ca49e96-a4fc-4e54-bb55-b32d42d72734\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk" Jan 22 09:33:03 crc kubenswrapper[4892]: I0122 09:33:03.536435 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ca49e96-a4fc-4e54-bb55-b32d42d72734-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk\" (UID: \"3ca49e96-a4fc-4e54-bb55-b32d42d72734\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk" Jan 22 09:33:03 crc kubenswrapper[4892]: I0122 09:33:03.538459 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ca49e96-a4fc-4e54-bb55-b32d42d72734-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk\" (UID: \"3ca49e96-a4fc-4e54-bb55-b32d42d72734\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk" Jan 22 09:33:03 crc kubenswrapper[4892]: I0122 09:33:03.538751 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ca49e96-a4fc-4e54-bb55-b32d42d72734-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk\" (UID: \"3ca49e96-a4fc-4e54-bb55-b32d42d72734\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk" Jan 22 09:33:03 crc kubenswrapper[4892]: I0122 09:33:03.548729 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm9kj\" (UniqueName: \"kubernetes.io/projected/3ca49e96-a4fc-4e54-bb55-b32d42d72734-kube-api-access-hm9kj\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk\" (UID: \"3ca49e96-a4fc-4e54-bb55-b32d42d72734\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk" Jan 22 09:33:03 crc kubenswrapper[4892]: I0122 09:33:03.657970 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk" Jan 22 09:33:04 crc kubenswrapper[4892]: I0122 09:33:04.167168 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk"] Jan 22 09:33:04 crc kubenswrapper[4892]: I0122 09:33:04.244546 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk" event={"ID":"3ca49e96-a4fc-4e54-bb55-b32d42d72734","Type":"ContainerStarted","Data":"1964fb45a5117572caea551f91f1e317d30f6abeab7cd5da3c035e1167efa4e8"} Jan 22 09:33:05 crc kubenswrapper[4892]: I0122 09:33:05.256236 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk" event={"ID":"3ca49e96-a4fc-4e54-bb55-b32d42d72734","Type":"ContainerStarted","Data":"5ff6dc4880460abd2a80faeab6687c655ffedc4a52d126a82c01df74d04a5d09"} Jan 22 09:33:05 crc kubenswrapper[4892]: I0122 09:33:05.271544 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk" podStartSLOduration=1.813960819 podStartE2EDuration="2.271522192s" podCreationTimestamp="2026-01-22 09:33:03 +0000 UTC" firstStartedPulling="2026-01-22 09:33:04.178136893 +0000 UTC m=+1354.022215956" lastFinishedPulling="2026-01-22 09:33:04.635698266 +0000 UTC m=+1354.479777329" observedRunningTime="2026-01-22 09:33:05.270144928 +0000 UTC m=+1355.114224011" watchObservedRunningTime="2026-01-22 09:33:05.271522192 +0000 UTC m=+1355.115601265" Jan 22 09:33:16 crc kubenswrapper[4892]: I0122 09:33:16.323731 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:33:16 crc kubenswrapper[4892]: I0122 09:33:16.324376 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:33:35 crc kubenswrapper[4892]: I0122 09:33:35.399857 4892 scope.go:117] "RemoveContainer" containerID="761cbc2ad31d8c772853deace9c46eb9472b5e14da71aff569f880d3995af45e" Jan 22 09:33:35 crc kubenswrapper[4892]: I0122 09:33:35.425816 4892 scope.go:117] "RemoveContainer" containerID="1802c480527504a240f7ff8eec03d66747e8800343ff31f84f31e19dd58c9f50" Jan 22 09:33:35 crc kubenswrapper[4892]: I0122 09:33:35.448925 4892 scope.go:117] "RemoveContainer" containerID="de49466765ecf204df91f01d6efb4f0058ea655787bbe0e721d5dc73a57d6c53" Jan 22 09:33:35 crc kubenswrapper[4892]: I0122 09:33:35.469856 4892 scope.go:117] "RemoveContainer" containerID="6e0685451da732e1f83de3cb22ff107893159d0811e0d9ff91fbdf035a796499" Jan 22 09:33:35 crc kubenswrapper[4892]: I0122 09:33:35.522817 4892 scope.go:117] "RemoveContainer" containerID="7ab0c122ca045b4851d2361c00985396f6a69250f33605485d75150d6de26677" Jan 22 09:33:35 crc kubenswrapper[4892]: I0122 09:33:35.574536 4892 scope.go:117] "RemoveContainer" containerID="9e7e5c6b61a197138f10c0535d765b391add3e2e041ba0cdac992275d295244f" Jan 22 09:33:35 crc kubenswrapper[4892]: I0122 09:33:35.606151 4892 scope.go:117] "RemoveContainer" 
containerID="0c9dc88d48c4e57d605232a03b5b6b6b43be710e3bd9ee7e4f054d6d3224280d" Jan 22 09:33:35 crc kubenswrapper[4892]: I0122 09:33:35.653944 4892 scope.go:117] "RemoveContainer" containerID="17e66fb14d1b6ec93a3c7e2c1fce3db7e8a71d586e2f3b5a98c2d01b7d580d39" Jan 22 09:33:46 crc kubenswrapper[4892]: I0122 09:33:46.323071 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:33:46 crc kubenswrapper[4892]: I0122 09:33:46.323696 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:34:16 crc kubenswrapper[4892]: I0122 09:34:16.324346 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:34:16 crc kubenswrapper[4892]: I0122 09:34:16.325015 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:34:16 crc kubenswrapper[4892]: I0122 09:34:16.325060 4892 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" Jan 22 09:34:16 crc kubenswrapper[4892]: I0122 09:34:16.325853 4892 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"78a6433c45938fca7e3f01a04a252af5d76315e063c7dbe1b1f8aa3b3903b18b"} pod="openshift-machine-config-operator/machine-config-daemon-w87tf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 09:34:16 crc kubenswrapper[4892]: I0122 09:34:16.325902 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" containerID="cri-o://78a6433c45938fca7e3f01a04a252af5d76315e063c7dbe1b1f8aa3b3903b18b" gracePeriod=600 Jan 22 09:34:18 crc kubenswrapper[4892]: I0122 09:34:18.024087 4892 generic.go:334] "Generic (PLEG): container finished" podID="4765e554-3060-4876-90fe-5e054619d7a1" containerID="78a6433c45938fca7e3f01a04a252af5d76315e063c7dbe1b1f8aa3b3903b18b" exitCode=0 Jan 22 09:34:18 crc kubenswrapper[4892]: I0122 09:34:18.024189 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerDied","Data":"78a6433c45938fca7e3f01a04a252af5d76315e063c7dbe1b1f8aa3b3903b18b"} Jan 22 09:34:18 crc kubenswrapper[4892]: I0122 09:34:18.024853 4892 scope.go:117] "RemoveContainer" 
containerID="31a997f31663709d14ae5efb219a31b8ac9b066d6e93055a348ee5203f0f3774" Jan 22 09:34:19 crc kubenswrapper[4892]: I0122 09:34:19.034961 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerStarted","Data":"71f692d92f13e3b3fa0c90b264737d6f7079a49854fa5892e8209bda5b8f1922"} Jan 22 09:34:35 crc kubenswrapper[4892]: I0122 09:34:35.778986 4892 scope.go:117] "RemoveContainer" containerID="7b33f0d6ad40d93289274e89f9df00e01fc4edd767d890cfd75327ef7f0ca2be" Jan 22 09:34:35 crc kubenswrapper[4892]: I0122 09:34:35.824243 4892 scope.go:117] "RemoveContainer" containerID="c1bb077fb24f04ff6d76182e7a41e5a728e7a5d3ff35d9285ae04d236c2e6c11" Jan 22 09:34:35 crc kubenswrapper[4892]: I0122 09:34:35.859594 4892 scope.go:117] "RemoveContainer" containerID="1f8239d4a424c7f45f80e365a1cae98bb144e64b1922913068ff471b1a50e8fa" Jan 22 09:35:32 crc kubenswrapper[4892]: I0122 09:35:32.093465 4892 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-674547b56f-gvjxm" podUID="accdf866-14d0-4308-a8d7-c598fde46122" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 22 09:35:35 crc kubenswrapper[4892]: I0122 09:35:35.958236 4892 scope.go:117] "RemoveContainer" containerID="6a855fe173290ab7bc40bdfdeb4813df9e4c4f9d8ae704973ee2706ddf2c78a8" Jan 22 09:35:35 crc kubenswrapper[4892]: I0122 09:35:35.983676 4892 scope.go:117] "RemoveContainer" containerID="930e3285fd5ea20f821d740427051e575da4a0f88357f591ccb91867e1f48efd" Jan 22 09:35:35 crc kubenswrapper[4892]: I0122 09:35:35.999302 4892 scope.go:117] "RemoveContainer" containerID="44858298fbb514f65a32b615af3c0284132892fe56df411d8b445a419753819d" Jan 22 09:35:36 crc kubenswrapper[4892]: I0122 09:35:36.023858 4892 scope.go:117] "RemoveContainer" containerID="451ed154220b8d6db1463825dd45065f5c4a9dc2fb4292fb20dd8f7405357f0f" Jan 22 09:36:17 crc kubenswrapper[4892]: I0122 09:36:17.231865 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p9xpg"] Jan 22 09:36:17 crc kubenswrapper[4892]: I0122 09:36:17.234246 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p9xpg" Jan 22 09:36:17 crc kubenswrapper[4892]: I0122 09:36:17.261364 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p9xpg"] Jan 22 09:36:17 crc kubenswrapper[4892]: I0122 09:36:17.359075 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14a6f426-6cd4-450f-9dce-0a5803a5f3e7-catalog-content\") pod \"redhat-operators-p9xpg\" (UID: \"14a6f426-6cd4-450f-9dce-0a5803a5f3e7\") " pod="openshift-marketplace/redhat-operators-p9xpg" Jan 22 09:36:17 crc kubenswrapper[4892]: I0122 09:36:17.359596 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14a6f426-6cd4-450f-9dce-0a5803a5f3e7-utilities\") pod \"redhat-operators-p9xpg\" (UID: \"14a6f426-6cd4-450f-9dce-0a5803a5f3e7\") " pod="openshift-marketplace/redhat-operators-p9xpg" Jan 22 09:36:17 crc kubenswrapper[4892]: I0122 09:36:17.359658 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zglc\" (UniqueName: \"kubernetes.io/projected/14a6f426-6cd4-450f-9dce-0a5803a5f3e7-kube-api-access-9zglc\") pod \"redhat-operators-p9xpg\" (UID: \"14a6f426-6cd4-450f-9dce-0a5803a5f3e7\") " pod="openshift-marketplace/redhat-operators-p9xpg" Jan 22 09:36:17 crc kubenswrapper[4892]: I0122 09:36:17.461673 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14a6f426-6cd4-450f-9dce-0a5803a5f3e7-utilities\") pod \"redhat-operators-p9xpg\" (UID: \"14a6f426-6cd4-450f-9dce-0a5803a5f3e7\") " pod="openshift-marketplace/redhat-operators-p9xpg" Jan 22 09:36:17 crc kubenswrapper[4892]: I0122 09:36:17.461732 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zglc\" (UniqueName: \"kubernetes.io/projected/14a6f426-6cd4-450f-9dce-0a5803a5f3e7-kube-api-access-9zglc\") pod \"redhat-operators-p9xpg\" (UID: \"14a6f426-6cd4-450f-9dce-0a5803a5f3e7\") " pod="openshift-marketplace/redhat-operators-p9xpg" Jan 22 09:36:17 crc kubenswrapper[4892]: I0122 09:36:17.461774 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14a6f426-6cd4-450f-9dce-0a5803a5f3e7-catalog-content\") pod \"redhat-operators-p9xpg\" (UID: \"14a6f426-6cd4-450f-9dce-0a5803a5f3e7\") " pod="openshift-marketplace/redhat-operators-p9xpg" Jan 22 09:36:17 crc kubenswrapper[4892]: I0122 09:36:17.463066 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14a6f426-6cd4-450f-9dce-0a5803a5f3e7-utilities\") pod \"redhat-operators-p9xpg\" (UID: \"14a6f426-6cd4-450f-9dce-0a5803a5f3e7\") " pod="openshift-marketplace/redhat-operators-p9xpg" Jan 22 09:36:17 crc kubenswrapper[4892]: I0122 09:36:17.463074 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14a6f426-6cd4-450f-9dce-0a5803a5f3e7-catalog-content\") pod \"redhat-operators-p9xpg\" (UID: \"14a6f426-6cd4-450f-9dce-0a5803a5f3e7\") " pod="openshift-marketplace/redhat-operators-p9xpg" Jan 22 09:36:17 crc kubenswrapper[4892]: I0122 09:36:17.490995 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9zglc\" (UniqueName: \"kubernetes.io/projected/14a6f426-6cd4-450f-9dce-0a5803a5f3e7-kube-api-access-9zglc\") pod \"redhat-operators-p9xpg\" (UID: \"14a6f426-6cd4-450f-9dce-0a5803a5f3e7\") " pod="openshift-marketplace/redhat-operators-p9xpg" Jan 22 09:36:17 crc kubenswrapper[4892]: I0122 09:36:17.563614 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p9xpg" Jan 22 09:36:18 crc kubenswrapper[4892]: I0122 09:36:18.069475 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p9xpg"] Jan 22 09:36:18 crc kubenswrapper[4892]: I0122 09:36:18.158030 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p9xpg" event={"ID":"14a6f426-6cd4-450f-9dce-0a5803a5f3e7","Type":"ContainerStarted","Data":"0e66860db935b2999c64ff4744974ae7ffbeed08fb653ef72d719d839192b002"} Jan 22 09:36:19 crc kubenswrapper[4892]: I0122 09:36:19.169478 4892 generic.go:334] "Generic (PLEG): container finished" podID="14a6f426-6cd4-450f-9dce-0a5803a5f3e7" containerID="7e96f834c5f643106271e10e8c69abbce965123daac5f2c46830eb37407cfe0a" exitCode=0 Jan 22 09:36:19 crc kubenswrapper[4892]: I0122 09:36:19.169575 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p9xpg" event={"ID":"14a6f426-6cd4-450f-9dce-0a5803a5f3e7","Type":"ContainerDied","Data":"7e96f834c5f643106271e10e8c69abbce965123daac5f2c46830eb37407cfe0a"} Jan 22 09:36:19 crc kubenswrapper[4892]: I0122 09:36:19.172945 4892 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 09:36:24 crc kubenswrapper[4892]: I0122 09:36:24.211004 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p9xpg" event={"ID":"14a6f426-6cd4-450f-9dce-0a5803a5f3e7","Type":"ContainerStarted","Data":"ae31cdc12249b01742b97559f91b34ad1f81573104a48cd7bff3d5bc6eb4f45e"} Jan 22 09:36:31 crc kubenswrapper[4892]: I0122 09:36:31.278765 4892 generic.go:334] "Generic (PLEG): container finished" podID="14a6f426-6cd4-450f-9dce-0a5803a5f3e7" containerID="ae31cdc12249b01742b97559f91b34ad1f81573104a48cd7bff3d5bc6eb4f45e" exitCode=0 Jan 22 09:36:31 crc kubenswrapper[4892]: I0122 09:36:31.278844 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p9xpg" event={"ID":"14a6f426-6cd4-450f-9dce-0a5803a5f3e7","Type":"ContainerDied","Data":"ae31cdc12249b01742b97559f91b34ad1f81573104a48cd7bff3d5bc6eb4f45e"} Jan 22 09:36:35 crc kubenswrapper[4892]: I0122 09:36:35.314133 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p9xpg" event={"ID":"14a6f426-6cd4-450f-9dce-0a5803a5f3e7","Type":"ContainerStarted","Data":"177a817fceba7a8dc948e6a153e3b322574f25a3d80847c7fcd3e743002b2f72"} Jan 22 09:36:35 crc kubenswrapper[4892]: I0122 09:36:35.338559 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p9xpg" podStartSLOduration=3.130008552 podStartE2EDuration="18.338536418s" podCreationTimestamp="2026-01-22 09:36:17 +0000 UTC" firstStartedPulling="2026-01-22 09:36:19.172700515 +0000 UTC m=+1549.016779578" lastFinishedPulling="2026-01-22 09:36:34.381228381 +0000 UTC m=+1564.225307444" observedRunningTime="2026-01-22 09:36:35.332150331 +0000 UTC m=+1565.176229404" watchObservedRunningTime="2026-01-22 09:36:35.338536418 +0000 UTC m=+1565.182615481" Jan 22 09:36:36 crc 
kubenswrapper[4892]: I0122 09:36:36.068679 4892 scope.go:117] "RemoveContainer" containerID="a60550b361096232dfeb609bd3307d456cd741165d7085bd2bb46c14d60de89f" Jan 22 09:36:36 crc kubenswrapper[4892]: I0122 09:36:36.096702 4892 scope.go:117] "RemoveContainer" containerID="10a66b212df198e0f5926a20f8fd18efd6d09099bad778a4d4347c197076cf10" Jan 22 09:36:37 crc kubenswrapper[4892]: I0122 09:36:37.564250 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p9xpg" Jan 22 09:36:37 crc kubenswrapper[4892]: I0122 09:36:37.564623 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p9xpg" Jan 22 09:36:38 crc kubenswrapper[4892]: I0122 09:36:38.628108 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p9xpg" podUID="14a6f426-6cd4-450f-9dce-0a5803a5f3e7" containerName="registry-server" probeResult="failure" output=< Jan 22 09:36:38 crc kubenswrapper[4892]: timeout: failed to connect service ":50051" within 1s Jan 22 09:36:38 crc kubenswrapper[4892]: > Jan 22 09:36:43 crc kubenswrapper[4892]: I0122 09:36:43.866162 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b6qq5"] Jan 22 09:36:43 crc kubenswrapper[4892]: I0122 09:36:43.874014 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b6qq5" Jan 22 09:36:43 crc kubenswrapper[4892]: I0122 09:36:43.912814 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b6qq5"] Jan 22 09:36:44 crc kubenswrapper[4892]: I0122 09:36:44.014471 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5vwr\" (UniqueName: \"kubernetes.io/projected/902275cf-1068-4c6f-90e4-01f11e3a89fb-kube-api-access-q5vwr\") pod \"certified-operators-b6qq5\" (UID: \"902275cf-1068-4c6f-90e4-01f11e3a89fb\") " pod="openshift-marketplace/certified-operators-b6qq5" Jan 22 09:36:44 crc kubenswrapper[4892]: I0122 09:36:44.014594 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/902275cf-1068-4c6f-90e4-01f11e3a89fb-catalog-content\") pod \"certified-operators-b6qq5\" (UID: \"902275cf-1068-4c6f-90e4-01f11e3a89fb\") " pod="openshift-marketplace/certified-operators-b6qq5" Jan 22 09:36:44 crc kubenswrapper[4892]: I0122 09:36:44.014685 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/902275cf-1068-4c6f-90e4-01f11e3a89fb-utilities\") pod \"certified-operators-b6qq5\" (UID: \"902275cf-1068-4c6f-90e4-01f11e3a89fb\") " pod="openshift-marketplace/certified-operators-b6qq5" Jan 22 09:36:44 crc kubenswrapper[4892]: I0122 09:36:44.117156 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5vwr\" (UniqueName: \"kubernetes.io/projected/902275cf-1068-4c6f-90e4-01f11e3a89fb-kube-api-access-q5vwr\") pod \"certified-operators-b6qq5\" (UID: \"902275cf-1068-4c6f-90e4-01f11e3a89fb\") " pod="openshift-marketplace/certified-operators-b6qq5"
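
The Startup probe failure above ("timeout: failed to connect service \":50051\" within 1s") is a gRPC health check: marketplace catalog pods expose the standard grpc.health.v1 Health service on :50051, and the probe keeps failing until the registry-server finishes loading the catalog content that the extract-* init steps unpacked. A minimal sketch of the same check follows; the probe binary actually configured in the pod spec is not visible in the log, so this reimplements the check directly and assumes a plaintext connection.

```go
// Sketch of a grpc.health.v1 check against the registry-server port above.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	conn, err := grpc.NewClient("127.0.0.1:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		fmt.Println("probe failure:", err)
		return
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), time.Second) // 1s, as in the log output
	defer cancel()

	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		fmt.Println("probe failure:", err) // e.g. deadline exceeded while the catalog loads
		return
	}
	fmt.Println("probe status:", resp.GetStatus()) // SERVING once the registry is ready
}
```

Jan 22 09:36:44 crc kubenswrapper[4892]: I0122 09:36:44.117263 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: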
\"kubernetes.io/empty-dir/902275cf-1068-4c6f-90e4-01f11e3a89fb-catalog-content\") pod \"certified-operators-b6qq5\" (UID: \"902275cf-1068-4c6f-90e4-01f11e3a89fb\") " pod="openshift-marketplace/certified-operators-b6qq5" Jan 22 09:36:44 crc kubenswrapper[4892]: I0122 09:36:44.117368 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/902275cf-1068-4c6f-90e4-01f11e3a89fb-utilities\") pod \"certified-operators-b6qq5\" (UID: \"902275cf-1068-4c6f-90e4-01f11e3a89fb\") " pod="openshift-marketplace/certified-operators-b6qq5" Jan 22 09:36:44 crc kubenswrapper[4892]: I0122 09:36:44.117848 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/902275cf-1068-4c6f-90e4-01f11e3a89fb-utilities\") pod \"certified-operators-b6qq5\" (UID: \"902275cf-1068-4c6f-90e4-01f11e3a89fb\") " pod="openshift-marketplace/certified-operators-b6qq5" Jan 22 09:36:44 crc kubenswrapper[4892]: I0122 09:36:44.117868 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/902275cf-1068-4c6f-90e4-01f11e3a89fb-catalog-content\") pod \"certified-operators-b6qq5\" (UID: \"902275cf-1068-4c6f-90e4-01f11e3a89fb\") " pod="openshift-marketplace/certified-operators-b6qq5" Jan 22 09:36:44 crc kubenswrapper[4892]: I0122 09:36:44.140153 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5vwr\" (UniqueName: \"kubernetes.io/projected/902275cf-1068-4c6f-90e4-01f11e3a89fb-kube-api-access-q5vwr\") pod \"certified-operators-b6qq5\" (UID: \"902275cf-1068-4c6f-90e4-01f11e3a89fb\") " pod="openshift-marketplace/certified-operators-b6qq5" Jan 22 09:36:44 crc kubenswrapper[4892]: I0122 09:36:44.225447 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b6qq5" Jan 22 09:36:44 crc kubenswrapper[4892]: I0122 09:36:44.755852 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b6qq5"] Jan 22 09:36:44 crc kubenswrapper[4892]: W0122 09:36:44.760332 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod902275cf_1068_4c6f_90e4_01f11e3a89fb.slice/crio-ecdcb2d9a559d284969da1cd307854cc9429423277045ecd0fdb046b6d75e468 WatchSource:0}: Error finding container ecdcb2d9a559d284969da1cd307854cc9429423277045ecd0fdb046b6d75e468: Status 404 returned error can't find the container with id ecdcb2d9a559d284969da1cd307854cc9429423277045ecd0fdb046b6d75e468 Jan 22 09:36:45 crc kubenswrapper[4892]: I0122 09:36:45.417421 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6qq5" event={"ID":"902275cf-1068-4c6f-90e4-01f11e3a89fb","Type":"ContainerStarted","Data":"ecdcb2d9a559d284969da1cd307854cc9429423277045ecd0fdb046b6d75e468"} Jan 22 09:36:46 crc kubenswrapper[4892]: I0122 09:36:46.323972 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:36:46 crc kubenswrapper[4892]: I0122 09:36:46.324429 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:36:46 crc kubenswrapper[4892]: I0122 09:36:46.427363 4892 generic.go:334] "Generic (PLEG): container finished" podID="902275cf-1068-4c6f-90e4-01f11e3a89fb" containerID="2222c9b0397ffbdf5c003e0add0951793907f3e745e5a706b00ed00e0e8e1cb3" exitCode=0 Jan 22 09:36:46 crc kubenswrapper[4892]: I0122 09:36:46.427408 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6qq5" event={"ID":"902275cf-1068-4c6f-90e4-01f11e3a89fb","Type":"ContainerDied","Data":"2222c9b0397ffbdf5c003e0add0951793907f3e745e5a706b00ed00e0e8e1cb3"} Jan 22 09:36:47 crc kubenswrapper[4892]: I0122 09:36:47.612009 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p9xpg" Jan 22 09:36:47 crc kubenswrapper[4892]: I0122 09:36:47.655832 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p9xpg" Jan 22 09:36:49 crc kubenswrapper[4892]: I0122 09:36:49.248354 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p9xpg"] Jan 22 09:36:49 crc kubenswrapper[4892]: I0122 09:36:49.458114 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p9xpg" podUID="14a6f426-6cd4-450f-9dce-0a5803a5f3e7" containerName="registry-server" containerID="cri-o://177a817fceba7a8dc948e6a153e3b322574f25a3d80847c7fcd3e743002b2f72" gracePeriod=2 Jan 22 09:36:54 crc kubenswrapper[4892]: I0122 09:36:54.503608 4892 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-p9xpg_14a6f426-6cd4-450f-9dce-0a5803a5f3e7/registry-server/0.log" Jan 22 09:36:54 crc kubenswrapper[4892]: I0122 09:36:54.506381 4892 generic.go:334] "Generic (PLEG): container finished" podID="14a6f426-6cd4-450f-9dce-0a5803a5f3e7" containerID="177a817fceba7a8dc948e6a153e3b322574f25a3d80847c7fcd3e743002b2f72" exitCode=137 Jan 22 09:36:54 crc kubenswrapper[4892]: I0122 09:36:54.506451 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p9xpg" event={"ID":"14a6f426-6cd4-450f-9dce-0a5803a5f3e7","Type":"ContainerDied","Data":"177a817fceba7a8dc948e6a153e3b322574f25a3d80847c7fcd3e743002b2f72"} Jan 22 09:36:55 crc kubenswrapper[4892]: I0122 09:36:55.492607 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p9xpg_14a6f426-6cd4-450f-9dce-0a5803a5f3e7/registry-server/0.log" Jan 22 09:36:55 crc kubenswrapper[4892]: I0122 09:36:55.493797 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p9xpg" Jan 22 09:36:55 crc kubenswrapper[4892]: I0122 09:36:55.522862 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p9xpg_14a6f426-6cd4-450f-9dce-0a5803a5f3e7/registry-server/0.log" Jan 22 09:36:55 crc kubenswrapper[4892]: I0122 09:36:55.524689 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p9xpg" event={"ID":"14a6f426-6cd4-450f-9dce-0a5803a5f3e7","Type":"ContainerDied","Data":"0e66860db935b2999c64ff4744974ae7ffbeed08fb653ef72d719d839192b002"} Jan 22 09:36:55 crc kubenswrapper[4892]: I0122 09:36:55.524760 4892 scope.go:117] "RemoveContainer" containerID="177a817fceba7a8dc948e6a153e3b322574f25a3d80847c7fcd3e743002b2f72" Jan 22 09:36:55 crc kubenswrapper[4892]: I0122 09:36:55.524964 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p9xpg" Jan 22 09:36:55 crc kubenswrapper[4892]: I0122 09:36:55.553119 4892 scope.go:117] "RemoveContainer" containerID="ae31cdc12249b01742b97559f91b34ad1f81573104a48cd7bff3d5bc6eb4f45e" Jan 22 09:36:55 crc kubenswrapper[4892]: I0122 09:36:55.581671 4892 scope.go:117] "RemoveContainer" containerID="7e96f834c5f643106271e10e8c69abbce965123daac5f2c46830eb37407cfe0a" Jan 22 09:36:55 crc kubenswrapper[4892]: I0122 09:36:55.634427 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zglc\" (UniqueName: \"kubernetes.io/projected/14a6f426-6cd4-450f-9dce-0a5803a5f3e7-kube-api-access-9zglc\") pod \"14a6f426-6cd4-450f-9dce-0a5803a5f3e7\" (UID: \"14a6f426-6cd4-450f-9dce-0a5803a5f3e7\") " Jan 22 09:36:55 crc kubenswrapper[4892]: I0122 09:36:55.634520 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14a6f426-6cd4-450f-9dce-0a5803a5f3e7-utilities\") pod \"14a6f426-6cd4-450f-9dce-0a5803a5f3e7\" (UID: \"14a6f426-6cd4-450f-9dce-0a5803a5f3e7\") " Jan 22 09:36:55 crc kubenswrapper[4892]: I0122 09:36:55.634676 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14a6f426-6cd4-450f-9dce-0a5803a5f3e7-catalog-content\") pod \"14a6f426-6cd4-450f-9dce-0a5803a5f3e7\" (UID: \"14a6f426-6cd4-450f-9dce-0a5803a5f3e7\") " Jan 22 09:36:55 crc kubenswrapper[4892]: I0122 09:36:55.635990 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14a6f426-6cd4-450f-9dce-0a5803a5f3e7-utilities" (OuterVolumeSpecName: "utilities") pod "14a6f426-6cd4-450f-9dce-0a5803a5f3e7" (UID: "14a6f426-6cd4-450f-9dce-0a5803a5f3e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:36:55 crc kubenswrapper[4892]: I0122 09:36:55.641575 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14a6f426-6cd4-450f-9dce-0a5803a5f3e7-kube-api-access-9zglc" (OuterVolumeSpecName: "kube-api-access-9zglc") pod "14a6f426-6cd4-450f-9dce-0a5803a5f3e7" (UID: "14a6f426-6cd4-450f-9dce-0a5803a5f3e7"). InnerVolumeSpecName "kube-api-access-9zglc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:36:55 crc kubenswrapper[4892]: I0122 09:36:55.739034 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zglc\" (UniqueName: \"kubernetes.io/projected/14a6f426-6cd4-450f-9dce-0a5803a5f3e7-kube-api-access-9zglc\") on node \"crc\" DevicePath \"\"" Jan 22 09:36:55 crc kubenswrapper[4892]: I0122 09:36:55.739075 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14a6f426-6cd4-450f-9dce-0a5803a5f3e7-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:36:55 crc kubenswrapper[4892]: I0122 09:36:55.851427 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14a6f426-6cd4-450f-9dce-0a5803a5f3e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14a6f426-6cd4-450f-9dce-0a5803a5f3e7" (UID: "14a6f426-6cd4-450f-9dce-0a5803a5f3e7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:36:55 crc kubenswrapper[4892]: I0122 09:36:55.943269 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14a6f426-6cd4-450f-9dce-0a5803a5f3e7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:36:56 crc kubenswrapper[4892]: I0122 09:36:56.168369 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p9xpg"] Jan 22 09:36:56 crc kubenswrapper[4892]: I0122 09:36:56.184081 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p9xpg"] Jan 22 09:36:56 crc kubenswrapper[4892]: E0122 09:36:56.309906 4892 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14a6f426_6cd4_450f_9dce_0a5803a5f3e7.slice/crio-0e66860db935b2999c64ff4744974ae7ffbeed08fb653ef72d719d839192b002\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14a6f426_6cd4_450f_9dce_0a5803a5f3e7.slice\": RecentStats: unable to find data in memory cache]" Jan 22 09:36:57 crc kubenswrapper[4892]: I0122 09:36:57.432002 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14a6f426-6cd4-450f-9dce-0a5803a5f3e7" path="/var/lib/kubelet/pods/14a6f426-6cd4-450f-9dce-0a5803a5f3e7/volumes" Jan 22 09:37:02 crc kubenswrapper[4892]: I0122 09:37:02.590668 4892 generic.go:334] "Generic (PLEG): container finished" podID="902275cf-1068-4c6f-90e4-01f11e3a89fb" containerID="4219749305fa3ca1b6d42dcebde8adb0cd4e336f40bcab0bcd9f117836235efb" exitCode=0 Jan 22 09:37:02 crc kubenswrapper[4892]: I0122 09:37:02.590783 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6qq5" event={"ID":"902275cf-1068-4c6f-90e4-01f11e3a89fb","Type":"ContainerDied","Data":"4219749305fa3ca1b6d42dcebde8adb0cd4e336f40bcab0bcd9f117836235efb"} Jan 22 09:37:06 crc kubenswrapper[4892]: I0122 09:37:06.627739 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6qq5" event={"ID":"902275cf-1068-4c6f-90e4-01f11e3a89fb","Type":"ContainerStarted","Data":"f93c05c81bb7a02c96a37227d52cf1e5598504dd1e6329ea304708fbe001136f"} Jan 22 09:37:06 crc kubenswrapper[4892]: I0122 09:37:06.660461 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b6qq5" podStartSLOduration=6.382904837 podStartE2EDuration="23.660430559s" podCreationTimestamp="2026-01-22 09:36:43 +0000 UTC" firstStartedPulling="2026-01-22 09:36:47.436476774 +0000 UTC m=+1577.280555847" lastFinishedPulling="2026-01-22 09:37:04.714002496 +0000 UTC m=+1594.558081569" observedRunningTime="2026-01-22 09:37:06.650804683 +0000 UTC m=+1596.494883746" watchObservedRunningTime="2026-01-22 09:37:06.660430559 +0000 UTC m=+1596.504509622" Jan 22 09:37:07 crc kubenswrapper[4892]: I0122 09:37:07.172070 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wzm4q"] Jan 22 09:37:07 crc kubenswrapper[4892]: E0122 09:37:07.172565 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14a6f426-6cd4-450f-9dce-0a5803a5f3e7" containerName="registry-server" Jan 22 09:37:07 crc kubenswrapper[4892]: I0122 09:37:07.172584 4892 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="14a6f426-6cd4-450f-9dce-0a5803a5f3e7" containerName="registry-server" Jan 22 09:37:07 crc kubenswrapper[4892]: E0122 09:37:07.172612 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14a6f426-6cd4-450f-9dce-0a5803a5f3e7" containerName="extract-utilities" Jan 22 09:37:07 crc kubenswrapper[4892]: I0122 09:37:07.172621 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="14a6f426-6cd4-450f-9dce-0a5803a5f3e7" containerName="extract-utilities" Jan 22 09:37:07 crc kubenswrapper[4892]: E0122 09:37:07.172643 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14a6f426-6cd4-450f-9dce-0a5803a5f3e7" containerName="extract-content" Jan 22 09:37:07 crc kubenswrapper[4892]: I0122 09:37:07.172650 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="14a6f426-6cd4-450f-9dce-0a5803a5f3e7" containerName="extract-content" Jan 22 09:37:07 crc kubenswrapper[4892]: I0122 09:37:07.172954 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="14a6f426-6cd4-450f-9dce-0a5803a5f3e7" containerName="registry-server" Jan 22 09:37:07 crc kubenswrapper[4892]: I0122 09:37:07.174511 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wzm4q" Jan 22 09:37:07 crc kubenswrapper[4892]: I0122 09:37:07.183458 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wzm4q"] Jan 22 09:37:07 crc kubenswrapper[4892]: I0122 09:37:07.275064 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b8b3c89-f210-4494-bd5a-e5f4b641e5f8-catalog-content\") pod \"redhat-marketplace-wzm4q\" (UID: \"2b8b3c89-f210-4494-bd5a-e5f4b641e5f8\") " pod="openshift-marketplace/redhat-marketplace-wzm4q" Jan 22 09:37:07 crc kubenswrapper[4892]: I0122 09:37:07.275802 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnbxc\" (UniqueName: \"kubernetes.io/projected/2b8b3c89-f210-4494-bd5a-e5f4b641e5f8-kube-api-access-nnbxc\") pod \"redhat-marketplace-wzm4q\" (UID: \"2b8b3c89-f210-4494-bd5a-e5f4b641e5f8\") " pod="openshift-marketplace/redhat-marketplace-wzm4q" Jan 22 09:37:07 crc kubenswrapper[4892]: I0122 09:37:07.276003 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b8b3c89-f210-4494-bd5a-e5f4b641e5f8-utilities\") pod \"redhat-marketplace-wzm4q\" (UID: \"2b8b3c89-f210-4494-bd5a-e5f4b641e5f8\") " pod="openshift-marketplace/redhat-marketplace-wzm4q" Jan 22 09:37:07 crc kubenswrapper[4892]: I0122 09:37:07.377782 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b8b3c89-f210-4494-bd5a-e5f4b641e5f8-catalog-content\") pod \"redhat-marketplace-wzm4q\" (UID: \"2b8b3c89-f210-4494-bd5a-e5f4b641e5f8\") " pod="openshift-marketplace/redhat-marketplace-wzm4q" Jan 22 09:37:07 crc kubenswrapper[4892]: I0122 09:37:07.377831 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnbxc\" (UniqueName: \"kubernetes.io/projected/2b8b3c89-f210-4494-bd5a-e5f4b641e5f8-kube-api-access-nnbxc\") pod \"redhat-marketplace-wzm4q\" (UID: \"2b8b3c89-f210-4494-bd5a-e5f4b641e5f8\") " pod="openshift-marketplace/redhat-marketplace-wzm4q" Jan 22 09:37:07 crc kubenswrapper[4892]: I0122 09:37:07.377889 4892 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b8b3c89-f210-4494-bd5a-e5f4b641e5f8-utilities\") pod \"redhat-marketplace-wzm4q\" (UID: \"2b8b3c89-f210-4494-bd5a-e5f4b641e5f8\") " pod="openshift-marketplace/redhat-marketplace-wzm4q" Jan 22 09:37:07 crc kubenswrapper[4892]: I0122 09:37:07.378394 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b8b3c89-f210-4494-bd5a-e5f4b641e5f8-utilities\") pod \"redhat-marketplace-wzm4q\" (UID: \"2b8b3c89-f210-4494-bd5a-e5f4b641e5f8\") " pod="openshift-marketplace/redhat-marketplace-wzm4q" Jan 22 09:37:07 crc kubenswrapper[4892]: I0122 09:37:07.378599 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b8b3c89-f210-4494-bd5a-e5f4b641e5f8-catalog-content\") pod \"redhat-marketplace-wzm4q\" (UID: \"2b8b3c89-f210-4494-bd5a-e5f4b641e5f8\") " pod="openshift-marketplace/redhat-marketplace-wzm4q" Jan 22 09:37:07 crc kubenswrapper[4892]: I0122 09:37:07.397535 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnbxc\" (UniqueName: \"kubernetes.io/projected/2b8b3c89-f210-4494-bd5a-e5f4b641e5f8-kube-api-access-nnbxc\") pod \"redhat-marketplace-wzm4q\" (UID: \"2b8b3c89-f210-4494-bd5a-e5f4b641e5f8\") " pod="openshift-marketplace/redhat-marketplace-wzm4q" Jan 22 09:37:07 crc kubenswrapper[4892]: I0122 09:37:07.499705 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wzm4q" Jan 22 09:37:08 crc kubenswrapper[4892]: I0122 09:37:08.006552 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wzm4q"] Jan 22 09:37:08 crc kubenswrapper[4892]: I0122 09:37:08.061465 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-cg57h"] Jan 22 09:37:08 crc kubenswrapper[4892]: I0122 09:37:08.105406 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-a427-account-create-update-xncm7"] Jan 22 09:37:08 crc kubenswrapper[4892]: I0122 09:37:08.126698 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-cg57h"] Jan 22 09:37:08 crc kubenswrapper[4892]: I0122 09:37:08.140769 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-96xfx"] Jan 22 09:37:08 crc kubenswrapper[4892]: I0122 09:37:08.150051 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-a653-account-create-update-ggqvv"] Jan 22 09:37:08 crc kubenswrapper[4892]: I0122 09:37:08.170417 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-a427-account-create-update-xncm7"] Jan 22 09:37:08 crc kubenswrapper[4892]: I0122 09:37:08.178086 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-bdn46"] Jan 22 09:37:08 crc kubenswrapper[4892]: I0122 09:37:08.185577 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-96xfx"] Jan 22 09:37:08 crc kubenswrapper[4892]: I0122 09:37:08.193442 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-a653-account-create-update-ggqvv"] Jan 22 09:37:08 crc kubenswrapper[4892]: I0122 09:37:08.203865 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-bdn46"] Jan 22 09:37:08 crc kubenswrapper[4892]: I0122 
09:37:08.214730 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d54a-account-create-update-zv67x"] Jan 22 09:37:08 crc kubenswrapper[4892]: I0122 09:37:08.225202 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-d54a-account-create-update-zv67x"] Jan 22 09:37:08 crc kubenswrapper[4892]: I0122 09:37:08.649161 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wzm4q" event={"ID":"2b8b3c89-f210-4494-bd5a-e5f4b641e5f8","Type":"ContainerStarted","Data":"6087777ee3652cf4fbe7a6348798de7e6c29fdecab9495bf7ca92b95ca46d88a"} Jan 22 09:37:09 crc kubenswrapper[4892]: I0122 09:37:09.433916 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08202c5a-0fd3-454b-b3e8-fe19c0abfb1d" path="/var/lib/kubelet/pods/08202c5a-0fd3-454b-b3e8-fe19c0abfb1d/volumes" Jan 22 09:37:09 crc kubenswrapper[4892]: I0122 09:37:09.434774 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1deb2fb8-da6b-4557-bac8-48ad0bc42e52" path="/var/lib/kubelet/pods/1deb2fb8-da6b-4557-bac8-48ad0bc42e52/volumes" Jan 22 09:37:09 crc kubenswrapper[4892]: I0122 09:37:09.435348 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1" path="/var/lib/kubelet/pods/44bfbab6-2053-4a28-a9e3-01c8bf6e8cc1/volumes" Jan 22 09:37:09 crc kubenswrapper[4892]: I0122 09:37:09.436069 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8626f341-0124-473c-a852-58b5c4e24c0a" path="/var/lib/kubelet/pods/8626f341-0124-473c-a852-58b5c4e24c0a/volumes" Jan 22 09:37:09 crc kubenswrapper[4892]: I0122 09:37:09.437582 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3562a65-6cdb-43f8-972c-32c454f33a14" path="/var/lib/kubelet/pods/b3562a65-6cdb-43f8-972c-32c454f33a14/volumes" Jan 22 09:37:09 crc kubenswrapper[4892]: I0122 09:37:09.438086 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e479527f-d4fd-4c96-ab6f-8c205525df26" path="/var/lib/kubelet/pods/e479527f-d4fd-4c96-ab6f-8c205525df26/volumes" Jan 22 09:37:12 crc kubenswrapper[4892]: I0122 09:37:12.684377 4892 generic.go:334] "Generic (PLEG): container finished" podID="2b8b3c89-f210-4494-bd5a-e5f4b641e5f8" containerID="185ad05fcc3797d6c82c1cfd8ac87e47db7684d4d948f4e136f35e90851a374e" exitCode=0 Jan 22 09:37:12 crc kubenswrapper[4892]: I0122 09:37:12.684422 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wzm4q" event={"ID":"2b8b3c89-f210-4494-bd5a-e5f4b641e5f8","Type":"ContainerDied","Data":"185ad05fcc3797d6c82c1cfd8ac87e47db7684d4d948f4e136f35e90851a374e"} Jan 22 09:37:14 crc kubenswrapper[4892]: I0122 09:37:14.225790 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b6qq5" Jan 22 09:37:14 crc kubenswrapper[4892]: I0122 09:37:14.226390 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b6qq5" Jan 22 09:37:14 crc kubenswrapper[4892]: I0122 09:37:14.278718 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b6qq5" Jan 22 09:37:14 crc kubenswrapper[4892]: I0122 09:37:14.755693 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b6qq5" Jan 22 09:37:15 crc kubenswrapper[4892]: I0122 09:37:15.069124 4892 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b6qq5"] Jan 22 09:37:15 crc kubenswrapper[4892]: I0122 09:37:15.713179 4892 generic.go:334] "Generic (PLEG): container finished" podID="2b8b3c89-f210-4494-bd5a-e5f4b641e5f8" containerID="c431c72fbab824e31e16daba555491b443f01312e8b214d79985738ee7368183" exitCode=0 Jan 22 09:37:15 crc kubenswrapper[4892]: I0122 09:37:15.713262 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wzm4q" event={"ID":"2b8b3c89-f210-4494-bd5a-e5f4b641e5f8","Type":"ContainerDied","Data":"c431c72fbab824e31e16daba555491b443f01312e8b214d79985738ee7368183"} Jan 22 09:37:16 crc kubenswrapper[4892]: I0122 09:37:16.323969 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:37:16 crc kubenswrapper[4892]: I0122 09:37:16.324591 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:37:16 crc kubenswrapper[4892]: I0122 09:37:16.723802 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b6qq5" podUID="902275cf-1068-4c6f-90e4-01f11e3a89fb" containerName="registry-server" containerID="cri-o://f93c05c81bb7a02c96a37227d52cf1e5598504dd1e6329ea304708fbe001136f" gracePeriod=2 Jan 22 09:37:16 crc kubenswrapper[4892]: I0122 09:37:16.724825 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wzm4q" event={"ID":"2b8b3c89-f210-4494-bd5a-e5f4b641e5f8","Type":"ContainerStarted","Data":"ecbaef72f53e7774ed4a9ade9e6c68a6a29b6e9de7e4ee5c5cca7ec4e6cea88d"} Jan 22 09:37:16 crc kubenswrapper[4892]: I0122 09:37:16.748479 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wzm4q" podStartSLOduration=6.320636443 podStartE2EDuration="9.748462496s" podCreationTimestamp="2026-01-22 09:37:07 +0000 UTC" firstStartedPulling="2026-01-22 09:37:12.686136686 +0000 UTC m=+1602.530215749" lastFinishedPulling="2026-01-22 09:37:16.113962739 +0000 UTC m=+1605.958041802" observedRunningTime="2026-01-22 09:37:16.745884963 +0000 UTC m=+1606.589964026" watchObservedRunningTime="2026-01-22 09:37:16.748462496 +0000 UTC m=+1606.592541559" Jan 22 09:37:17 crc kubenswrapper[4892]: I0122 09:37:17.232735 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b6qq5" Jan 22 09:37:17 crc kubenswrapper[4892]: I0122 09:37:17.373555 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/902275cf-1068-4c6f-90e4-01f11e3a89fb-utilities\") pod \"902275cf-1068-4c6f-90e4-01f11e3a89fb\" (UID: \"902275cf-1068-4c6f-90e4-01f11e3a89fb\") " Jan 22 09:37:17 crc kubenswrapper[4892]: I0122 09:37:17.373748 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/902275cf-1068-4c6f-90e4-01f11e3a89fb-catalog-content\") pod \"902275cf-1068-4c6f-90e4-01f11e3a89fb\" (UID: \"902275cf-1068-4c6f-90e4-01f11e3a89fb\") " Jan 22 09:37:17 crc kubenswrapper[4892]: I0122 09:37:17.374156 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5vwr\" (UniqueName: \"kubernetes.io/projected/902275cf-1068-4c6f-90e4-01f11e3a89fb-kube-api-access-q5vwr\") pod \"902275cf-1068-4c6f-90e4-01f11e3a89fb\" (UID: \"902275cf-1068-4c6f-90e4-01f11e3a89fb\") " Jan 22 09:37:17 crc kubenswrapper[4892]: I0122 09:37:17.375218 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/902275cf-1068-4c6f-90e4-01f11e3a89fb-utilities" (OuterVolumeSpecName: "utilities") pod "902275cf-1068-4c6f-90e4-01f11e3a89fb" (UID: "902275cf-1068-4c6f-90e4-01f11e3a89fb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:37:17 crc kubenswrapper[4892]: I0122 09:37:17.379831 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/902275cf-1068-4c6f-90e4-01f11e3a89fb-kube-api-access-q5vwr" (OuterVolumeSpecName: "kube-api-access-q5vwr") pod "902275cf-1068-4c6f-90e4-01f11e3a89fb" (UID: "902275cf-1068-4c6f-90e4-01f11e3a89fb"). InnerVolumeSpecName "kube-api-access-q5vwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:37:17 crc kubenswrapper[4892]: I0122 09:37:17.429139 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/902275cf-1068-4c6f-90e4-01f11e3a89fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "902275cf-1068-4c6f-90e4-01f11e3a89fb" (UID: "902275cf-1068-4c6f-90e4-01f11e3a89fb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:37:17 crc kubenswrapper[4892]: I0122 09:37:17.476662 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5vwr\" (UniqueName: \"kubernetes.io/projected/902275cf-1068-4c6f-90e4-01f11e3a89fb-kube-api-access-q5vwr\") on node \"crc\" DevicePath \"\"" Jan 22 09:37:17 crc kubenswrapper[4892]: I0122 09:37:17.476703 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/902275cf-1068-4c6f-90e4-01f11e3a89fb-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:37:17 crc kubenswrapper[4892]: I0122 09:37:17.476712 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/902275cf-1068-4c6f-90e4-01f11e3a89fb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:37:17 crc kubenswrapper[4892]: I0122 09:37:17.500250 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wzm4q" Jan 22 09:37:17 crc kubenswrapper[4892]: I0122 09:37:17.500309 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wzm4q" Jan 22 09:37:17 crc kubenswrapper[4892]: I0122 09:37:17.739046 4892 generic.go:334] "Generic (PLEG): container finished" podID="902275cf-1068-4c6f-90e4-01f11e3a89fb" containerID="f93c05c81bb7a02c96a37227d52cf1e5598504dd1e6329ea304708fbe001136f" exitCode=0 Jan 22 09:37:17 crc kubenswrapper[4892]: I0122 09:37:17.739210 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b6qq5" Jan 22 09:37:17 crc kubenswrapper[4892]: I0122 09:37:17.739333 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6qq5" event={"ID":"902275cf-1068-4c6f-90e4-01f11e3a89fb","Type":"ContainerDied","Data":"f93c05c81bb7a02c96a37227d52cf1e5598504dd1e6329ea304708fbe001136f"} Jan 22 09:37:17 crc kubenswrapper[4892]: I0122 09:37:17.739388 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6qq5" event={"ID":"902275cf-1068-4c6f-90e4-01f11e3a89fb","Type":"ContainerDied","Data":"ecdcb2d9a559d284969da1cd307854cc9429423277045ecd0fdb046b6d75e468"} Jan 22 09:37:17 crc kubenswrapper[4892]: I0122 09:37:17.739407 4892 scope.go:117] "RemoveContainer" containerID="f93c05c81bb7a02c96a37227d52cf1e5598504dd1e6329ea304708fbe001136f" Jan 22 09:37:17 crc kubenswrapper[4892]: I0122 09:37:17.766903 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b6qq5"] Jan 22 09:37:17 crc kubenswrapper[4892]: I0122 09:37:17.768220 4892 scope.go:117] "RemoveContainer" containerID="4219749305fa3ca1b6d42dcebde8adb0cd4e336f40bcab0bcd9f117836235efb" Jan 22 09:37:17 crc kubenswrapper[4892]: I0122 09:37:17.775499 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b6qq5"] Jan 22 09:37:17 crc kubenswrapper[4892]: I0122 09:37:17.812697 4892 scope.go:117] "RemoveContainer" containerID="2222c9b0397ffbdf5c003e0add0951793907f3e745e5a706b00ed00e0e8e1cb3" Jan 22 09:37:17 crc kubenswrapper[4892]: I0122 09:37:17.853522 4892 scope.go:117] "RemoveContainer" containerID="f93c05c81bb7a02c96a37227d52cf1e5598504dd1e6329ea304708fbe001136f" Jan 22 09:37:17 crc kubenswrapper[4892]: E0122 09:37:17.854130 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"f93c05c81bb7a02c96a37227d52cf1e5598504dd1e6329ea304708fbe001136f\": container with ID starting with f93c05c81bb7a02c96a37227d52cf1e5598504dd1e6329ea304708fbe001136f not found: ID does not exist" containerID="f93c05c81bb7a02c96a37227d52cf1e5598504dd1e6329ea304708fbe001136f" Jan 22 09:37:17 crc kubenswrapper[4892]: I0122 09:37:17.854208 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f93c05c81bb7a02c96a37227d52cf1e5598504dd1e6329ea304708fbe001136f"} err="failed to get container status \"f93c05c81bb7a02c96a37227d52cf1e5598504dd1e6329ea304708fbe001136f\": rpc error: code = NotFound desc = could not find container \"f93c05c81bb7a02c96a37227d52cf1e5598504dd1e6329ea304708fbe001136f\": container with ID starting with f93c05c81bb7a02c96a37227d52cf1e5598504dd1e6329ea304708fbe001136f not found: ID does not exist" Jan 22 09:37:17 crc kubenswrapper[4892]: I0122 09:37:17.854257 4892 scope.go:117] "RemoveContainer" containerID="4219749305fa3ca1b6d42dcebde8adb0cd4e336f40bcab0bcd9f117836235efb" Jan 22 09:37:17 crc kubenswrapper[4892]: E0122 09:37:17.854810 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4219749305fa3ca1b6d42dcebde8adb0cd4e336f40bcab0bcd9f117836235efb\": container with ID starting with 4219749305fa3ca1b6d42dcebde8adb0cd4e336f40bcab0bcd9f117836235efb not found: ID does not exist" containerID="4219749305fa3ca1b6d42dcebde8adb0cd4e336f40bcab0bcd9f117836235efb" Jan 22 09:37:17 crc kubenswrapper[4892]: I0122 09:37:17.854857 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4219749305fa3ca1b6d42dcebde8adb0cd4e336f40bcab0bcd9f117836235efb"} err="failed to get container status \"4219749305fa3ca1b6d42dcebde8adb0cd4e336f40bcab0bcd9f117836235efb\": rpc error: code = NotFound desc = could not find container \"4219749305fa3ca1b6d42dcebde8adb0cd4e336f40bcab0bcd9f117836235efb\": container with ID starting with 4219749305fa3ca1b6d42dcebde8adb0cd4e336f40bcab0bcd9f117836235efb not found: ID does not exist" Jan 22 09:37:17 crc kubenswrapper[4892]: I0122 09:37:17.854896 4892 scope.go:117] "RemoveContainer" containerID="2222c9b0397ffbdf5c003e0add0951793907f3e745e5a706b00ed00e0e8e1cb3" Jan 22 09:37:17 crc kubenswrapper[4892]: E0122 09:37:17.855301 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2222c9b0397ffbdf5c003e0add0951793907f3e745e5a706b00ed00e0e8e1cb3\": container with ID starting with 2222c9b0397ffbdf5c003e0add0951793907f3e745e5a706b00ed00e0e8e1cb3 not found: ID does not exist" containerID="2222c9b0397ffbdf5c003e0add0951793907f3e745e5a706b00ed00e0e8e1cb3" Jan 22 09:37:17 crc kubenswrapper[4892]: I0122 09:37:17.855403 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2222c9b0397ffbdf5c003e0add0951793907f3e745e5a706b00ed00e0e8e1cb3"} err="failed to get container status \"2222c9b0397ffbdf5c003e0add0951793907f3e745e5a706b00ed00e0e8e1cb3\": rpc error: code = NotFound desc = could not find container \"2222c9b0397ffbdf5c003e0add0951793907f3e745e5a706b00ed00e0e8e1cb3\": container with ID starting with 2222c9b0397ffbdf5c003e0add0951793907f3e745e5a706b00ed00e0e8e1cb3 not found: ID does not exist" Jan 22 09:37:18 crc kubenswrapper[4892]: I0122 09:37:18.554609 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-wzm4q" 
podUID="2b8b3c89-f210-4494-bd5a-e5f4b641e5f8" containerName="registry-server" probeResult="failure" output=< Jan 22 09:37:18 crc kubenswrapper[4892]: timeout: failed to connect service ":50051" within 1s Jan 22 09:37:18 crc kubenswrapper[4892]: > Jan 22 09:37:19 crc kubenswrapper[4892]: I0122 09:37:19.463760 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="902275cf-1068-4c6f-90e4-01f11e3a89fb" path="/var/lib/kubelet/pods/902275cf-1068-4c6f-90e4-01f11e3a89fb/volumes" Jan 22 09:37:26 crc kubenswrapper[4892]: I0122 09:37:26.036253 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-lgcpd"] Jan 22 09:37:26 crc kubenswrapper[4892]: I0122 09:37:26.046250 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-lgcpd"] Jan 22 09:37:27 crc kubenswrapper[4892]: I0122 09:37:27.428805 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46508830-6103-4f2f-b82d-9ca1fb7ae748" path="/var/lib/kubelet/pods/46508830-6103-4f2f-b82d-9ca1fb7ae748/volumes" Jan 22 09:37:27 crc kubenswrapper[4892]: I0122 09:37:27.555524 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wzm4q" Jan 22 09:37:27 crc kubenswrapper[4892]: I0122 09:37:27.604826 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wzm4q" Jan 22 09:37:27 crc kubenswrapper[4892]: I0122 09:37:27.789825 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wzm4q"] Jan 22 09:37:28 crc kubenswrapper[4892]: I0122 09:37:28.833097 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wzm4q" podUID="2b8b3c89-f210-4494-bd5a-e5f4b641e5f8" containerName="registry-server" containerID="cri-o://ecbaef72f53e7774ed4a9ade9e6c68a6a29b6e9de7e4ee5c5cca7ec4e6cea88d" gracePeriod=2 Jan 22 09:37:29 crc kubenswrapper[4892]: I0122 09:37:29.244838 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wzm4q" Jan 22 09:37:29 crc kubenswrapper[4892]: I0122 09:37:29.310500 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnbxc\" (UniqueName: \"kubernetes.io/projected/2b8b3c89-f210-4494-bd5a-e5f4b641e5f8-kube-api-access-nnbxc\") pod \"2b8b3c89-f210-4494-bd5a-e5f4b641e5f8\" (UID: \"2b8b3c89-f210-4494-bd5a-e5f4b641e5f8\") " Jan 22 09:37:29 crc kubenswrapper[4892]: I0122 09:37:29.310699 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b8b3c89-f210-4494-bd5a-e5f4b641e5f8-catalog-content\") pod \"2b8b3c89-f210-4494-bd5a-e5f4b641e5f8\" (UID: \"2b8b3c89-f210-4494-bd5a-e5f4b641e5f8\") " Jan 22 09:37:29 crc kubenswrapper[4892]: I0122 09:37:29.310776 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b8b3c89-f210-4494-bd5a-e5f4b641e5f8-utilities\") pod \"2b8b3c89-f210-4494-bd5a-e5f4b641e5f8\" (UID: \"2b8b3c89-f210-4494-bd5a-e5f4b641e5f8\") " Jan 22 09:37:29 crc kubenswrapper[4892]: I0122 09:37:29.311678 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b8b3c89-f210-4494-bd5a-e5f4b641e5f8-utilities" (OuterVolumeSpecName: "utilities") pod "2b8b3c89-f210-4494-bd5a-e5f4b641e5f8" (UID: "2b8b3c89-f210-4494-bd5a-e5f4b641e5f8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:37:29 crc kubenswrapper[4892]: I0122 09:37:29.321027 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b8b3c89-f210-4494-bd5a-e5f4b641e5f8-kube-api-access-nnbxc" (OuterVolumeSpecName: "kube-api-access-nnbxc") pod "2b8b3c89-f210-4494-bd5a-e5f4b641e5f8" (UID: "2b8b3c89-f210-4494-bd5a-e5f4b641e5f8"). InnerVolumeSpecName "kube-api-access-nnbxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:37:29 crc kubenswrapper[4892]: I0122 09:37:29.339135 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b8b3c89-f210-4494-bd5a-e5f4b641e5f8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b8b3c89-f210-4494-bd5a-e5f4b641e5f8" (UID: "2b8b3c89-f210-4494-bd5a-e5f4b641e5f8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:37:29 crc kubenswrapper[4892]: I0122 09:37:29.413215 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b8b3c89-f210-4494-bd5a-e5f4b641e5f8-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:37:29 crc kubenswrapper[4892]: I0122 09:37:29.413251 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnbxc\" (UniqueName: \"kubernetes.io/projected/2b8b3c89-f210-4494-bd5a-e5f4b641e5f8-kube-api-access-nnbxc\") on node \"crc\" DevicePath \"\"" Jan 22 09:37:29 crc kubenswrapper[4892]: I0122 09:37:29.413265 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b8b3c89-f210-4494-bd5a-e5f4b641e5f8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:37:29 crc kubenswrapper[4892]: I0122 09:37:29.843225 4892 generic.go:334] "Generic (PLEG): container finished" podID="2b8b3c89-f210-4494-bd5a-e5f4b641e5f8" containerID="ecbaef72f53e7774ed4a9ade9e6c68a6a29b6e9de7e4ee5c5cca7ec4e6cea88d" exitCode=0 Jan 22 09:37:29 crc kubenswrapper[4892]: I0122 09:37:29.843264 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wzm4q" event={"ID":"2b8b3c89-f210-4494-bd5a-e5f4b641e5f8","Type":"ContainerDied","Data":"ecbaef72f53e7774ed4a9ade9e6c68a6a29b6e9de7e4ee5c5cca7ec4e6cea88d"} Jan 22 09:37:29 crc kubenswrapper[4892]: I0122 09:37:29.843314 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wzm4q" event={"ID":"2b8b3c89-f210-4494-bd5a-e5f4b641e5f8","Type":"ContainerDied","Data":"6087777ee3652cf4fbe7a6348798de7e6c29fdecab9495bf7ca92b95ca46d88a"} Jan 22 09:37:29 crc kubenswrapper[4892]: I0122 09:37:29.843331 4892 scope.go:117] "RemoveContainer" containerID="ecbaef72f53e7774ed4a9ade9e6c68a6a29b6e9de7e4ee5c5cca7ec4e6cea88d" Jan 22 09:37:29 crc kubenswrapper[4892]: I0122 09:37:29.843353 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wzm4q" Jan 22 09:37:29 crc kubenswrapper[4892]: I0122 09:37:29.865016 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wzm4q"] Jan 22 09:37:29 crc kubenswrapper[4892]: I0122 09:37:29.869785 4892 scope.go:117] "RemoveContainer" containerID="c431c72fbab824e31e16daba555491b443f01312e8b214d79985738ee7368183" Jan 22 09:37:29 crc kubenswrapper[4892]: I0122 09:37:29.874699 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wzm4q"] Jan 22 09:37:29 crc kubenswrapper[4892]: I0122 09:37:29.897168 4892 scope.go:117] "RemoveContainer" containerID="185ad05fcc3797d6c82c1cfd8ac87e47db7684d4d948f4e136f35e90851a374e" Jan 22 09:37:29 crc kubenswrapper[4892]: I0122 09:37:29.937791 4892 scope.go:117] "RemoveContainer" containerID="ecbaef72f53e7774ed4a9ade9e6c68a6a29b6e9de7e4ee5c5cca7ec4e6cea88d" Jan 22 09:37:29 crc kubenswrapper[4892]: E0122 09:37:29.938271 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecbaef72f53e7774ed4a9ade9e6c68a6a29b6e9de7e4ee5c5cca7ec4e6cea88d\": container with ID starting with ecbaef72f53e7774ed4a9ade9e6c68a6a29b6e9de7e4ee5c5cca7ec4e6cea88d not found: ID does not exist" containerID="ecbaef72f53e7774ed4a9ade9e6c68a6a29b6e9de7e4ee5c5cca7ec4e6cea88d" Jan 22 09:37:29 crc kubenswrapper[4892]: I0122 09:37:29.938331 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecbaef72f53e7774ed4a9ade9e6c68a6a29b6e9de7e4ee5c5cca7ec4e6cea88d"} err="failed to get container status \"ecbaef72f53e7774ed4a9ade9e6c68a6a29b6e9de7e4ee5c5cca7ec4e6cea88d\": rpc error: code = NotFound desc = could not find container \"ecbaef72f53e7774ed4a9ade9e6c68a6a29b6e9de7e4ee5c5cca7ec4e6cea88d\": container with ID starting with ecbaef72f53e7774ed4a9ade9e6c68a6a29b6e9de7e4ee5c5cca7ec4e6cea88d not found: ID does not exist" Jan 22 09:37:29 crc kubenswrapper[4892]: I0122 09:37:29.938356 4892 scope.go:117] "RemoveContainer" containerID="c431c72fbab824e31e16daba555491b443f01312e8b214d79985738ee7368183" Jan 22 09:37:29 crc kubenswrapper[4892]: E0122 09:37:29.938765 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c431c72fbab824e31e16daba555491b443f01312e8b214d79985738ee7368183\": container with ID starting with c431c72fbab824e31e16daba555491b443f01312e8b214d79985738ee7368183 not found: ID does not exist" containerID="c431c72fbab824e31e16daba555491b443f01312e8b214d79985738ee7368183" Jan 22 09:37:29 crc kubenswrapper[4892]: I0122 09:37:29.938809 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c431c72fbab824e31e16daba555491b443f01312e8b214d79985738ee7368183"} err="failed to get container status \"c431c72fbab824e31e16daba555491b443f01312e8b214d79985738ee7368183\": rpc error: code = NotFound desc = could not find container \"c431c72fbab824e31e16daba555491b443f01312e8b214d79985738ee7368183\": container with ID starting with c431c72fbab824e31e16daba555491b443f01312e8b214d79985738ee7368183 not found: ID does not exist" Jan 22 09:37:29 crc kubenswrapper[4892]: I0122 09:37:29.938837 4892 scope.go:117] "RemoveContainer" containerID="185ad05fcc3797d6c82c1cfd8ac87e47db7684d4d948f4e136f35e90851a374e" Jan 22 09:37:29 crc kubenswrapper[4892]: E0122 09:37:29.939107 4892 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"185ad05fcc3797d6c82c1cfd8ac87e47db7684d4d948f4e136f35e90851a374e\": container with ID starting with 185ad05fcc3797d6c82c1cfd8ac87e47db7684d4d948f4e136f35e90851a374e not found: ID does not exist" containerID="185ad05fcc3797d6c82c1cfd8ac87e47db7684d4d948f4e136f35e90851a374e" Jan 22 09:37:29 crc kubenswrapper[4892]: I0122 09:37:29.939133 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"185ad05fcc3797d6c82c1cfd8ac87e47db7684d4d948f4e136f35e90851a374e"} err="failed to get container status \"185ad05fcc3797d6c82c1cfd8ac87e47db7684d4d948f4e136f35e90851a374e\": rpc error: code = NotFound desc = could not find container \"185ad05fcc3797d6c82c1cfd8ac87e47db7684d4d948f4e136f35e90851a374e\": container with ID starting with 185ad05fcc3797d6c82c1cfd8ac87e47db7684d4d948f4e136f35e90851a374e not found: ID does not exist" Jan 22 09:37:31 crc kubenswrapper[4892]: I0122 09:37:31.432993 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b8b3c89-f210-4494-bd5a-e5f4b641e5f8" path="/var/lib/kubelet/pods/2b8b3c89-f210-4494-bd5a-e5f4b641e5f8/volumes" Jan 22 09:37:33 crc kubenswrapper[4892]: I0122 09:37:33.448797 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-nhpjt"] Jan 22 09:37:33 crc kubenswrapper[4892]: I0122 09:37:33.458098 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-nhpjt"] Jan 22 09:37:35 crc kubenswrapper[4892]: I0122 09:37:35.428775 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2c286dd-c33b-453a-abdd-90baff8ff466" path="/var/lib/kubelet/pods/f2c286dd-c33b-453a-abdd-90baff8ff466/volumes" Jan 22 09:37:36 crc kubenswrapper[4892]: I0122 09:37:36.157267 4892 scope.go:117] "RemoveContainer" containerID="2008da1a8f75c2437ac912466a1ed4042821a1f7e816d3ca5960958d162ae924" Jan 22 09:37:36 crc kubenswrapper[4892]: I0122 09:37:36.199165 4892 scope.go:117] "RemoveContainer" containerID="4c59aaa0f642a96824fa24444ea1c48bee2b075e22415b626adce42e06642d56" Jan 22 09:37:36 crc kubenswrapper[4892]: I0122 09:37:36.225081 4892 scope.go:117] "RemoveContainer" containerID="7505806c1d05928ae4c7edfdd63022bca6230d3e12753e8266f34705c97849ce" Jan 22 09:37:36 crc kubenswrapper[4892]: I0122 09:37:36.265557 4892 scope.go:117] "RemoveContainer" containerID="c58ad6737c6e9d0e15931d7119af076b7652496f8a9f2552718d2db3884992b4" Jan 22 09:37:36 crc kubenswrapper[4892]: I0122 09:37:36.311793 4892 scope.go:117] "RemoveContainer" containerID="ade6274ed31e4c12f5e3b614cb123e1a8eb78d2efa08a847db314714639be0cf" Jan 22 09:37:36 crc kubenswrapper[4892]: I0122 09:37:36.354427 4892 scope.go:117] "RemoveContainer" containerID="ebfd97177e96052e5f34b6a1e08132d32cedec6a82b2370f3f674152dad4d597" Jan 22 09:37:36 crc kubenswrapper[4892]: I0122 09:37:36.376627 4892 scope.go:117] "RemoveContainer" containerID="7440878a808fa395f4ff86947cd5dfa89555459cc7a51d7a5a6a67094af2d819" Jan 22 09:37:36 crc kubenswrapper[4892]: I0122 09:37:36.437096 4892 scope.go:117] "RemoveContainer" containerID="891be5c97fdb5d7505f3036256c3dc1d789fc5311981832d41082b6be9d39af9" Jan 22 09:37:36 crc kubenswrapper[4892]: I0122 09:37:36.455634 4892 scope.go:117] "RemoveContainer" containerID="498e6bf79a012bd2f40212aec661feade85d4753213f3fa8b6dadec92e1ca86c" Jan 22 09:37:36 crc kubenswrapper[4892]: I0122 09:37:36.474267 4892 scope.go:117] "RemoveContainer" containerID="c76fa1524bd4bfc81dba90506da04c242fa2fc65976ed9dae44662e12297bcf0" Jan 22 
09:37:43 crc kubenswrapper[4892]: I0122 09:37:43.969238 4892 generic.go:334] "Generic (PLEG): container finished" podID="3ca49e96-a4fc-4e54-bb55-b32d42d72734" containerID="5ff6dc4880460abd2a80faeab6687c655ffedc4a52d126a82c01df74d04a5d09" exitCode=0 Jan 22 09:37:43 crc kubenswrapper[4892]: I0122 09:37:43.969341 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk" event={"ID":"3ca49e96-a4fc-4e54-bb55-b32d42d72734","Type":"ContainerDied","Data":"5ff6dc4880460abd2a80faeab6687c655ffedc4a52d126a82c01df74d04a5d09"} Jan 22 09:37:45 crc kubenswrapper[4892]: I0122 09:37:45.405024 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk" Jan 22 09:37:45 crc kubenswrapper[4892]: I0122 09:37:45.527412 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9kj\" (UniqueName: \"kubernetes.io/projected/3ca49e96-a4fc-4e54-bb55-b32d42d72734-kube-api-access-hm9kj\") pod \"3ca49e96-a4fc-4e54-bb55-b32d42d72734\" (UID: \"3ca49e96-a4fc-4e54-bb55-b32d42d72734\") " Jan 22 09:37:45 crc kubenswrapper[4892]: I0122 09:37:45.528607 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ca49e96-a4fc-4e54-bb55-b32d42d72734-bootstrap-combined-ca-bundle\") pod \"3ca49e96-a4fc-4e54-bb55-b32d42d72734\" (UID: \"3ca49e96-a4fc-4e54-bb55-b32d42d72734\") " Jan 22 09:37:45 crc kubenswrapper[4892]: I0122 09:37:45.528735 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ca49e96-a4fc-4e54-bb55-b32d42d72734-inventory\") pod \"3ca49e96-a4fc-4e54-bb55-b32d42d72734\" (UID: \"3ca49e96-a4fc-4e54-bb55-b32d42d72734\") " Jan 22 09:37:45 crc kubenswrapper[4892]: I0122 09:37:45.528850 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ca49e96-a4fc-4e54-bb55-b32d42d72734-ssh-key-openstack-edpm-ipam\") pod \"3ca49e96-a4fc-4e54-bb55-b32d42d72734\" (UID: \"3ca49e96-a4fc-4e54-bb55-b32d42d72734\") " Jan 22 09:37:45 crc kubenswrapper[4892]: I0122 09:37:45.533034 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ca49e96-a4fc-4e54-bb55-b32d42d72734-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "3ca49e96-a4fc-4e54-bb55-b32d42d72734" (UID: "3ca49e96-a4fc-4e54-bb55-b32d42d72734"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:37:45 crc kubenswrapper[4892]: I0122 09:37:45.534465 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ca49e96-a4fc-4e54-bb55-b32d42d72734-kube-api-access-hm9kj" (OuterVolumeSpecName: "kube-api-access-hm9kj") pod "3ca49e96-a4fc-4e54-bb55-b32d42d72734" (UID: "3ca49e96-a4fc-4e54-bb55-b32d42d72734"). InnerVolumeSpecName "kube-api-access-hm9kj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:37:45 crc kubenswrapper[4892]: I0122 09:37:45.555948 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ca49e96-a4fc-4e54-bb55-b32d42d72734-inventory" (OuterVolumeSpecName: "inventory") pod "3ca49e96-a4fc-4e54-bb55-b32d42d72734" (UID: "3ca49e96-a4fc-4e54-bb55-b32d42d72734"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:37:45 crc kubenswrapper[4892]: I0122 09:37:45.569361 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ca49e96-a4fc-4e54-bb55-b32d42d72734-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3ca49e96-a4fc-4e54-bb55-b32d42d72734" (UID: "3ca49e96-a4fc-4e54-bb55-b32d42d72734"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:37:45 crc kubenswrapper[4892]: I0122 09:37:45.632327 4892 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ca49e96-a4fc-4e54-bb55-b32d42d72734-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 09:37:45 crc kubenswrapper[4892]: I0122 09:37:45.632370 4892 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ca49e96-a4fc-4e54-bb55-b32d42d72734-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:37:45 crc kubenswrapper[4892]: I0122 09:37:45.632386 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hm9kj\" (UniqueName: \"kubernetes.io/projected/3ca49e96-a4fc-4e54-bb55-b32d42d72734-kube-api-access-hm9kj\") on node \"crc\" DevicePath \"\"" Jan 22 09:37:45 crc kubenswrapper[4892]: I0122 09:37:45.632396 4892 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ca49e96-a4fc-4e54-bb55-b32d42d72734-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:37:45 crc kubenswrapper[4892]: I0122 09:37:45.988895 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk" event={"ID":"3ca49e96-a4fc-4e54-bb55-b32d42d72734","Type":"ContainerDied","Data":"1964fb45a5117572caea551f91f1e317d30f6abeab7cd5da3c035e1167efa4e8"} Jan 22 09:37:45 crc kubenswrapper[4892]: I0122 09:37:45.988943 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1964fb45a5117572caea551f91f1e317d30f6abeab7cd5da3c035e1167efa4e8" Jan 22 09:37:45 crc kubenswrapper[4892]: I0122 09:37:45.989036 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk" Jan 22 09:37:46 crc kubenswrapper[4892]: I0122 09:37:46.073424 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-65kt2"] Jan 22 09:37:46 crc kubenswrapper[4892]: E0122 09:37:46.073754 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="902275cf-1068-4c6f-90e4-01f11e3a89fb" containerName="extract-utilities" Jan 22 09:37:46 crc kubenswrapper[4892]: I0122 09:37:46.073768 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="902275cf-1068-4c6f-90e4-01f11e3a89fb" containerName="extract-utilities" Jan 22 09:37:46 crc kubenswrapper[4892]: E0122 09:37:46.073791 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="902275cf-1068-4c6f-90e4-01f11e3a89fb" containerName="registry-server" Jan 22 09:37:46 crc kubenswrapper[4892]: I0122 09:37:46.073798 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="902275cf-1068-4c6f-90e4-01f11e3a89fb" containerName="registry-server" Jan 22 09:37:46 crc kubenswrapper[4892]: E0122 09:37:46.073806 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="902275cf-1068-4c6f-90e4-01f11e3a89fb" containerName="extract-content" Jan 22 09:37:46 crc kubenswrapper[4892]: I0122 09:37:46.073813 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="902275cf-1068-4c6f-90e4-01f11e3a89fb" containerName="extract-content" Jan 22 09:37:46 crc kubenswrapper[4892]: E0122 09:37:46.073823 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b8b3c89-f210-4494-bd5a-e5f4b641e5f8" containerName="extract-content" Jan 22 09:37:46 crc kubenswrapper[4892]: I0122 09:37:46.073829 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b8b3c89-f210-4494-bd5a-e5f4b641e5f8" containerName="extract-content" Jan 22 09:37:46 crc kubenswrapper[4892]: E0122 09:37:46.073840 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b8b3c89-f210-4494-bd5a-e5f4b641e5f8" containerName="registry-server" Jan 22 09:37:46 crc kubenswrapper[4892]: I0122 09:37:46.073846 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b8b3c89-f210-4494-bd5a-e5f4b641e5f8" containerName="registry-server" Jan 22 09:37:46 crc kubenswrapper[4892]: E0122 09:37:46.073853 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b8b3c89-f210-4494-bd5a-e5f4b641e5f8" containerName="extract-utilities" Jan 22 09:37:46 crc kubenswrapper[4892]: I0122 09:37:46.073859 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b8b3c89-f210-4494-bd5a-e5f4b641e5f8" containerName="extract-utilities" Jan 22 09:37:46 crc kubenswrapper[4892]: E0122 09:37:46.073872 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ca49e96-a4fc-4e54-bb55-b32d42d72734" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 22 09:37:46 crc kubenswrapper[4892]: I0122 09:37:46.073878 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ca49e96-a4fc-4e54-bb55-b32d42d72734" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 22 09:37:46 crc kubenswrapper[4892]: I0122 09:37:46.074038 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ca49e96-a4fc-4e54-bb55-b32d42d72734" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 22 09:37:46 crc kubenswrapper[4892]: I0122 09:37:46.074052 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b8b3c89-f210-4494-bd5a-e5f4b641e5f8" 
containerName="registry-server" Jan 22 09:37:46 crc kubenswrapper[4892]: I0122 09:37:46.074061 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="902275cf-1068-4c6f-90e4-01f11e3a89fb" containerName="registry-server" Jan 22 09:37:46 crc kubenswrapper[4892]: I0122 09:37:46.074623 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-65kt2" Jan 22 09:37:46 crc kubenswrapper[4892]: I0122 09:37:46.084039 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:37:46 crc kubenswrapper[4892]: I0122 09:37:46.084884 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:37:46 crc kubenswrapper[4892]: I0122 09:37:46.084968 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lbm5h" Jan 22 09:37:46 crc kubenswrapper[4892]: I0122 09:37:46.085076 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:37:46 crc kubenswrapper[4892]: I0122 09:37:46.099401 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-65kt2"] Jan 22 09:37:46 crc kubenswrapper[4892]: I0122 09:37:46.142975 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cd3e716-8070-42ec-87ad-4fc03fe2be23-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-65kt2\" (UID: \"9cd3e716-8070-42ec-87ad-4fc03fe2be23\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-65kt2" Jan 22 09:37:46 crc kubenswrapper[4892]: I0122 09:37:46.143072 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9kls\" (UniqueName: \"kubernetes.io/projected/9cd3e716-8070-42ec-87ad-4fc03fe2be23-kube-api-access-w9kls\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-65kt2\" (UID: \"9cd3e716-8070-42ec-87ad-4fc03fe2be23\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-65kt2" Jan 22 09:37:46 crc kubenswrapper[4892]: I0122 09:37:46.143120 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9cd3e716-8070-42ec-87ad-4fc03fe2be23-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-65kt2\" (UID: \"9cd3e716-8070-42ec-87ad-4fc03fe2be23\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-65kt2" Jan 22 09:37:46 crc kubenswrapper[4892]: I0122 09:37:46.244752 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cd3e716-8070-42ec-87ad-4fc03fe2be23-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-65kt2\" (UID: \"9cd3e716-8070-42ec-87ad-4fc03fe2be23\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-65kt2" Jan 22 09:37:46 crc kubenswrapper[4892]: I0122 09:37:46.244824 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9kls\" (UniqueName: \"kubernetes.io/projected/9cd3e716-8070-42ec-87ad-4fc03fe2be23-kube-api-access-w9kls\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-65kt2\" (UID: 
\"9cd3e716-8070-42ec-87ad-4fc03fe2be23\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-65kt2" Jan 22 09:37:46 crc kubenswrapper[4892]: I0122 09:37:46.244860 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9cd3e716-8070-42ec-87ad-4fc03fe2be23-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-65kt2\" (UID: \"9cd3e716-8070-42ec-87ad-4fc03fe2be23\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-65kt2" Jan 22 09:37:46 crc kubenswrapper[4892]: I0122 09:37:46.248178 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cd3e716-8070-42ec-87ad-4fc03fe2be23-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-65kt2\" (UID: \"9cd3e716-8070-42ec-87ad-4fc03fe2be23\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-65kt2" Jan 22 09:37:46 crc kubenswrapper[4892]: I0122 09:37:46.251801 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9cd3e716-8070-42ec-87ad-4fc03fe2be23-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-65kt2\" (UID: \"9cd3e716-8070-42ec-87ad-4fc03fe2be23\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-65kt2" Jan 22 09:37:46 crc kubenswrapper[4892]: I0122 09:37:46.261713 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9kls\" (UniqueName: \"kubernetes.io/projected/9cd3e716-8070-42ec-87ad-4fc03fe2be23-kube-api-access-w9kls\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-65kt2\" (UID: \"9cd3e716-8070-42ec-87ad-4fc03fe2be23\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-65kt2" Jan 22 09:37:46 crc kubenswrapper[4892]: I0122 09:37:46.323940 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:37:46 crc kubenswrapper[4892]: I0122 09:37:46.324003 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:37:46 crc kubenswrapper[4892]: I0122 09:37:46.324045 4892 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" Jan 22 09:37:46 crc kubenswrapper[4892]: I0122 09:37:46.324642 4892 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"71f692d92f13e3b3fa0c90b264737d6f7079a49854fa5892e8209bda5b8f1922"} pod="openshift-machine-config-operator/machine-config-daemon-w87tf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 09:37:46 crc kubenswrapper[4892]: I0122 09:37:46.324697 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" 
podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" containerID="cri-o://71f692d92f13e3b3fa0c90b264737d6f7079a49854fa5892e8209bda5b8f1922" gracePeriod=600 Jan 22 09:37:46 crc kubenswrapper[4892]: I0122 09:37:46.392890 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-65kt2" Jan 22 09:37:46 crc kubenswrapper[4892]: E0122 09:37:46.494181 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:37:46 crc kubenswrapper[4892]: I0122 09:37:46.979571 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-65kt2"] Jan 22 09:37:46 crc kubenswrapper[4892]: I0122 09:37:46.999761 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-65kt2" event={"ID":"9cd3e716-8070-42ec-87ad-4fc03fe2be23","Type":"ContainerStarted","Data":"2ac746772e2bb0728dddde7826cb9171b5a40b8ea3ccb031f9962d007500f306"} Jan 22 09:37:47 crc kubenswrapper[4892]: I0122 09:37:47.002663 4892 generic.go:334] "Generic (PLEG): container finished" podID="4765e554-3060-4876-90fe-5e054619d7a1" containerID="71f692d92f13e3b3fa0c90b264737d6f7079a49854fa5892e8209bda5b8f1922" exitCode=0 Jan 22 09:37:47 crc kubenswrapper[4892]: I0122 09:37:47.002710 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerDied","Data":"71f692d92f13e3b3fa0c90b264737d6f7079a49854fa5892e8209bda5b8f1922"} Jan 22 09:37:47 crc kubenswrapper[4892]: I0122 09:37:47.002750 4892 scope.go:117] "RemoveContainer" containerID="78a6433c45938fca7e3f01a04a252af5d76315e063c7dbe1b1f8aa3b3903b18b" Jan 22 09:37:47 crc kubenswrapper[4892]: I0122 09:37:47.003695 4892 scope.go:117] "RemoveContainer" containerID="71f692d92f13e3b3fa0c90b264737d6f7079a49854fa5892e8209bda5b8f1922" Jan 22 09:37:47 crc kubenswrapper[4892]: E0122 09:37:47.004115 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:37:49 crc kubenswrapper[4892]: I0122 09:37:49.072086 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-pv7r2"] Jan 22 09:37:49 crc kubenswrapper[4892]: I0122 09:37:49.087585 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-ca79-account-create-update-rhltg"] Jan 22 09:37:49 crc kubenswrapper[4892]: I0122 09:37:49.098259 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-f0f0-account-create-update-v8np7"] Jan 22 09:37:49 crc kubenswrapper[4892]: I0122 09:37:49.108425 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-jnldh"] Jan 22 09:37:49 
crc kubenswrapper[4892]: I0122 09:37:49.116442 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-ca79-account-create-update-rhltg"] Jan 22 09:37:49 crc kubenswrapper[4892]: I0122 09:37:49.126798 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-pv7r2"] Jan 22 09:37:49 crc kubenswrapper[4892]: I0122 09:37:49.136305 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8e4a-account-create-update-xv7gx"] Jan 22 09:37:49 crc kubenswrapper[4892]: I0122 09:37:49.145090 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-jnldh"] Jan 22 09:37:49 crc kubenswrapper[4892]: I0122 09:37:49.153125 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-f0f0-account-create-update-v8np7"] Jan 22 09:37:49 crc kubenswrapper[4892]: I0122 09:37:49.161135 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-8e4a-account-create-update-xv7gx"] Jan 22 09:37:49 crc kubenswrapper[4892]: I0122 09:37:49.169475 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-x5mtp"] Jan 22 09:37:49 crc kubenswrapper[4892]: I0122 09:37:49.179322 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-x5mtp"] Jan 22 09:37:49 crc kubenswrapper[4892]: I0122 09:37:49.431017 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a1df4bd-7cb1-40b0-88f7-578961c621cb" path="/var/lib/kubelet/pods/3a1df4bd-7cb1-40b0-88f7-578961c621cb/volumes" Jan 22 09:37:49 crc kubenswrapper[4892]: I0122 09:37:49.431871 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c59bcbd-6655-491c-82b4-9ca9ed61ff8c" path="/var/lib/kubelet/pods/3c59bcbd-6655-491c-82b4-9ca9ed61ff8c/volumes" Jan 22 09:37:49 crc kubenswrapper[4892]: I0122 09:37:49.432415 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="589bb094-75ad-4bc7-bf98-f5efaade599d" path="/var/lib/kubelet/pods/589bb094-75ad-4bc7-bf98-f5efaade599d/volumes" Jan 22 09:37:49 crc kubenswrapper[4892]: I0122 09:37:49.433018 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bc4eac9-d3bb-4c29-90e4-bb35bec79c96" path="/var/lib/kubelet/pods/6bc4eac9-d3bb-4c29-90e4-bb35bec79c96/volumes" Jan 22 09:37:49 crc kubenswrapper[4892]: I0122 09:37:49.434068 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fe42496-0b56-4cb9-a100-7098f1ecd0ae" path="/var/lib/kubelet/pods/6fe42496-0b56-4cb9-a100-7098f1ecd0ae/volumes" Jan 22 09:37:49 crc kubenswrapper[4892]: I0122 09:37:49.434915 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec122921-e058-41f5-932e-836e78d5c91e" path="/var/lib/kubelet/pods/ec122921-e058-41f5-932e-836e78d5c91e/volumes" Jan 22 09:37:50 crc kubenswrapper[4892]: I0122 09:37:50.079256 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-65kt2" event={"ID":"9cd3e716-8070-42ec-87ad-4fc03fe2be23","Type":"ContainerStarted","Data":"cc59a2b643a0a7016809d4a14aee0f3c2df634556a43d460dcb10108829fd905"} Jan 22 09:37:50 crc kubenswrapper[4892]: I0122 09:37:50.098504 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-65kt2" podStartSLOduration=2.388950831 podStartE2EDuration="4.098489661s" podCreationTimestamp="2026-01-22 09:37:46 +0000 UTC" firstStartedPulling="2026-01-22 09:37:46.986231255 +0000 
UTC m=+1636.830310318" lastFinishedPulling="2026-01-22 09:37:48.695770085 +0000 UTC m=+1638.539849148" observedRunningTime="2026-01-22 09:37:50.096690027 +0000 UTC m=+1639.940769090" watchObservedRunningTime="2026-01-22 09:37:50.098489661 +0000 UTC m=+1639.942568724" Jan 22 09:37:53 crc kubenswrapper[4892]: I0122 09:37:53.049758 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-8prsr"] Jan 22 09:37:53 crc kubenswrapper[4892]: I0122 09:37:53.061736 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-8prsr"] Jan 22 09:37:53 crc kubenswrapper[4892]: I0122 09:37:53.430842 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a418ac05-1cad-45b4-a9b6-74b4db83248f" path="/var/lib/kubelet/pods/a418ac05-1cad-45b4-a9b6-74b4db83248f/volumes" Jan 22 09:38:02 crc kubenswrapper[4892]: I0122 09:38:02.419065 4892 scope.go:117] "RemoveContainer" containerID="71f692d92f13e3b3fa0c90b264737d6f7079a49854fa5892e8209bda5b8f1922" Jan 22 09:38:02 crc kubenswrapper[4892]: E0122 09:38:02.419821 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:38:13 crc kubenswrapper[4892]: I0122 09:38:13.418570 4892 scope.go:117] "RemoveContainer" containerID="71f692d92f13e3b3fa0c90b264737d6f7079a49854fa5892e8209bda5b8f1922" Jan 22 09:38:13 crc kubenswrapper[4892]: E0122 09:38:13.419417 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:38:25 crc kubenswrapper[4892]: I0122 09:38:25.418247 4892 scope.go:117] "RemoveContainer" containerID="71f692d92f13e3b3fa0c90b264737d6f7079a49854fa5892e8209bda5b8f1922" Jan 22 09:38:25 crc kubenswrapper[4892]: E0122 09:38:25.419196 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:38:34 crc kubenswrapper[4892]: I0122 09:38:34.038646 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-5zrrd"] Jan 22 09:38:34 crc kubenswrapper[4892]: I0122 09:38:34.049171 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-lh52x"] Jan 22 09:38:34 crc kubenswrapper[4892]: I0122 09:38:34.058212 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-5zrrd"] Jan 22 09:38:34 crc kubenswrapper[4892]: I0122 09:38:34.066760 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-lh52x"] Jan 22 09:38:35 crc kubenswrapper[4892]: I0122 09:38:35.434028 
4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4" path="/var/lib/kubelet/pods/4a71d18b-0b3b-4c0c-bcf7-9e6e024a34c4/volumes" Jan 22 09:38:35 crc kubenswrapper[4892]: I0122 09:38:35.435071 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="892a47e9-2f83-4902-a210-3b23d56ad662" path="/var/lib/kubelet/pods/892a47e9-2f83-4902-a210-3b23d56ad662/volumes" Jan 22 09:38:36 crc kubenswrapper[4892]: I0122 09:38:36.419170 4892 scope.go:117] "RemoveContainer" containerID="71f692d92f13e3b3fa0c90b264737d6f7079a49854fa5892e8209bda5b8f1922" Jan 22 09:38:36 crc kubenswrapper[4892]: E0122 09:38:36.419554 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:38:36 crc kubenswrapper[4892]: I0122 09:38:36.676587 4892 scope.go:117] "RemoveContainer" containerID="10f0328e9097065a0095b2a4d49ae79fa35db2252bcc5d506f3ef8834793dd9b" Jan 22 09:38:36 crc kubenswrapper[4892]: I0122 09:38:36.735119 4892 scope.go:117] "RemoveContainer" containerID="5332bf474020455b8e38ff2cd59ac3697b1a724d1e3d2db8fce8bbe04bc8f323" Jan 22 09:38:36 crc kubenswrapper[4892]: I0122 09:38:36.780200 4892 scope.go:117] "RemoveContainer" containerID="e6bdf9989110bdcbe7493a35733771820dfed07321d77d9cce02845921687cf7" Jan 22 09:38:36 crc kubenswrapper[4892]: I0122 09:38:36.816840 4892 scope.go:117] "RemoveContainer" containerID="8564ac0ef64439d2b0c4bafeb6ed71771923e7608c9f5665e230751f0f0424cd" Jan 22 09:38:36 crc kubenswrapper[4892]: I0122 09:38:36.852883 4892 scope.go:117] "RemoveContainer" containerID="d88ef14bb125b501c5b1f2c747b133418527f116acd157c43883ca406c8067a1" Jan 22 09:38:36 crc kubenswrapper[4892]: I0122 09:38:36.900305 4892 scope.go:117] "RemoveContainer" containerID="c25908dd7f23d20fc7113b2c38ba023e8a767178e0237b2cb1f15fbb70f5bb65" Jan 22 09:38:36 crc kubenswrapper[4892]: I0122 09:38:36.935349 4892 scope.go:117] "RemoveContainer" containerID="201bac64f24ee694da1bc396467266b71da0c341c817ab543cb7f02b2c98a970" Jan 22 09:38:36 crc kubenswrapper[4892]: I0122 09:38:36.953896 4892 scope.go:117] "RemoveContainer" containerID="e3c2e779e5dcd1cee2c997526478080cdede4a0735aa75097bd9e085e344773b" Jan 22 09:38:36 crc kubenswrapper[4892]: I0122 09:38:36.973043 4892 scope.go:117] "RemoveContainer" containerID="18acfe35740859100c7341047b7e931f59f03ad4182587d633bad127393e567b" Jan 22 09:38:39 crc kubenswrapper[4892]: I0122 09:38:39.029165 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-5zvl8"] Jan 22 09:38:39 crc kubenswrapper[4892]: I0122 09:38:39.037229 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-5zvl8"] Jan 22 09:38:39 crc kubenswrapper[4892]: I0122 09:38:39.428809 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06ba4135-00fc-4891-bad5-e2e666eabd91" path="/var/lib/kubelet/pods/06ba4135-00fc-4891-bad5-e2e666eabd91/volumes" Jan 22 09:38:47 crc kubenswrapper[4892]: I0122 09:38:47.026889 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-lnmlq"] Jan 22 09:38:47 crc kubenswrapper[4892]: I0122 09:38:47.036267 4892 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/barbican-db-sync-lnmlq"] Jan 22 09:38:47 crc kubenswrapper[4892]: I0122 09:38:47.046174 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-7gn6t"] Jan 22 09:38:47 crc kubenswrapper[4892]: I0122 09:38:47.055567 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-7gn6t"] Jan 22 09:38:47 crc kubenswrapper[4892]: I0122 09:38:47.449667 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fa10241-be09-4db5-894b-845654f34a21" path="/var/lib/kubelet/pods/7fa10241-be09-4db5-894b-845654f34a21/volumes" Jan 22 09:38:47 crc kubenswrapper[4892]: I0122 09:38:47.451491 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8998452c-d0f3-42a2-8741-c70ffe854fda" path="/var/lib/kubelet/pods/8998452c-d0f3-42a2-8741-c70ffe854fda/volumes" Jan 22 09:38:48 crc kubenswrapper[4892]: I0122 09:38:48.419786 4892 scope.go:117] "RemoveContainer" containerID="71f692d92f13e3b3fa0c90b264737d6f7079a49854fa5892e8209bda5b8f1922" Jan 22 09:38:48 crc kubenswrapper[4892]: E0122 09:38:48.420138 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:39:00 crc kubenswrapper[4892]: I0122 09:39:00.419550 4892 scope.go:117] "RemoveContainer" containerID="71f692d92f13e3b3fa0c90b264737d6f7079a49854fa5892e8209bda5b8f1922" Jan 22 09:39:00 crc kubenswrapper[4892]: E0122 09:39:00.420220 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:39:12 crc kubenswrapper[4892]: I0122 09:39:12.419258 4892 scope.go:117] "RemoveContainer" containerID="71f692d92f13e3b3fa0c90b264737d6f7079a49854fa5892e8209bda5b8f1922" Jan 22 09:39:12 crc kubenswrapper[4892]: E0122 09:39:12.420049 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:39:23 crc kubenswrapper[4892]: I0122 09:39:23.418566 4892 scope.go:117] "RemoveContainer" containerID="71f692d92f13e3b3fa0c90b264737d6f7079a49854fa5892e8209bda5b8f1922" Jan 22 09:39:23 crc kubenswrapper[4892]: E0122 09:39:23.419498 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" 
Jan 22 09:39:27 crc kubenswrapper[4892]: I0122 09:39:27.043330 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-pxmk9"]
Jan 22 09:39:27 crc kubenswrapper[4892]: I0122 09:39:27.053217 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-pxmk9"]
Jan 22 09:39:27 crc kubenswrapper[4892]: I0122 09:39:27.433223 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35dbe302-2a50-49e2-874a-b3bfe80bb483" path="/var/lib/kubelet/pods/35dbe302-2a50-49e2-874a-b3bfe80bb483/volumes"
Jan 22 09:39:28 crc kubenswrapper[4892]: I0122 09:39:28.039974 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-w9xkt"]
Jan 22 09:39:28 crc kubenswrapper[4892]: I0122 09:39:28.051055 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-ltg2c"]
Jan 22 09:39:28 crc kubenswrapper[4892]: I0122 09:39:28.058131 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-w9xkt"]
Jan 22 09:39:28 crc kubenswrapper[4892]: I0122 09:39:28.064721 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-ltg2c"]
Jan 22 09:39:29 crc kubenswrapper[4892]: I0122 09:39:29.034626 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-3d72-account-create-update-phg7j"]
Jan 22 09:39:29 crc kubenswrapper[4892]: I0122 09:39:29.045801 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-6c9a-account-create-update-8ldll"]
Jan 22 09:39:29 crc kubenswrapper[4892]: I0122 09:39:29.055816 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-f45c-account-create-update-j2b4k"]
Jan 22 09:39:29 crc kubenswrapper[4892]: I0122 09:39:29.063041 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-f45c-account-create-update-j2b4k"]
Jan 22 09:39:29 crc kubenswrapper[4892]: I0122 09:39:29.070756 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-3d72-account-create-update-phg7j"]
Jan 22 09:39:29 crc kubenswrapper[4892]: I0122 09:39:29.077593 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-6c9a-account-create-update-8ldll"]
Jan 22 09:39:29 crc kubenswrapper[4892]: I0122 09:39:29.442543 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="294650da-35b8-402b-86ca-722359f803f8" path="/var/lib/kubelet/pods/294650da-35b8-402b-86ca-722359f803f8/volumes"
Jan 22 09:39:29 crc kubenswrapper[4892]: I0122 09:39:29.451778 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58bdf833-f1d4-4c50-9710-0453e093b082" path="/var/lib/kubelet/pods/58bdf833-f1d4-4c50-9710-0453e093b082/volumes"
Jan 22 09:39:29 crc kubenswrapper[4892]: I0122 09:39:29.453239 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="962154cf-811c-41cc-a43a-e63af5139dc8" path="/var/lib/kubelet/pods/962154cf-811c-41cc-a43a-e63af5139dc8/volumes"
Jan 22 09:39:29 crc kubenswrapper[4892]: I0122 09:39:29.454068 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b08dee6e-948e-480a-8642-ee350e0a05f1" path="/var/lib/kubelet/pods/b08dee6e-948e-480a-8642-ee350e0a05f1/volumes"
Jan 22 09:39:29 crc kubenswrapper[4892]: I0122 09:39:29.455190 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb" path="/var/lib/kubelet/pods/f25f8cf3-f9e4-49a2-a51b-145e6bf91ebb/volumes"
Jan 22 09:39:36 crc kubenswrapper[4892]: I0122 09:39:36.418800 4892 scope.go:117] "RemoveContainer" containerID="71f692d92f13e3b3fa0c90b264737d6f7079a49854fa5892e8209bda5b8f1922"
Jan 22 09:39:36 crc kubenswrapper[4892]: E0122 09:39:36.419938 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1"
Jan 22 09:39:37 crc kubenswrapper[4892]: I0122 09:39:37.116444 4892 scope.go:117] "RemoveContainer" containerID="3773b362a7a23bf787f49f103b98b8f0498a36edd2ddd34639b7d7dd4069321a"
Jan 22 09:39:37 crc kubenswrapper[4892]: I0122 09:39:37.624186 4892 scope.go:117] "RemoveContainer" containerID="95a411c8470faa5dd5f02ad37728c0582533d2ccc1cb27325759abc3f7a707d0"
Jan 22 09:39:37 crc kubenswrapper[4892]: I0122 09:39:37.651322 4892 scope.go:117] "RemoveContainer" containerID="1bd370dcccf7e6a6d737be0642b7b4c16ecbf0b887c971f8f29a9fb94836a665"
Jan 22 09:39:37 crc kubenswrapper[4892]: I0122 09:39:37.714532 4892 scope.go:117] "RemoveContainer" containerID="d3ffe4c077a923df65a6d90f20f7710d12ac358b6c20db9d26af90c6bca7eb85"
Jan 22 09:39:37 crc kubenswrapper[4892]: I0122 09:39:37.771885 4892 scope.go:117] "RemoveContainer" containerID="5060f249a60748b47dfbd46df678cc5143123eb7368e6309d40a896bee2771d1"
Jan 22 09:39:37 crc kubenswrapper[4892]: I0122 09:39:37.825596 4892 scope.go:117] "RemoveContainer" containerID="7ff7e2459ca349500abce67943fe5cc8aca33d9a9a4aa65f5c8f9d48613cc7d1"
Jan 22 09:39:37 crc kubenswrapper[4892]: I0122 09:39:37.851747 4892 scope.go:117] "RemoveContainer" containerID="bb64dd8001bc36afa6de8eab0183e26ee4663489abe299392f350142d6fcb6f8"
Jan 22 09:39:37 crc kubenswrapper[4892]: I0122 09:39:37.873723 4892 scope.go:117] "RemoveContainer" containerID="d1fdb170b377c8ba3490763922a87c3734e0abc7857942e68bf6fd68b5f99015"
Jan 22 09:39:37 crc kubenswrapper[4892]: I0122 09:39:37.900272 4892 scope.go:117] "RemoveContainer" containerID="bffd5591c6aea395c0c24ad45e178f44c9380bc6476f6ab41c1de9f5edbfabe7"
Jan 22 09:39:47 crc kubenswrapper[4892]: I0122 09:39:47.419019 4892 scope.go:117] "RemoveContainer" containerID="71f692d92f13e3b3fa0c90b264737d6f7079a49854fa5892e8209bda5b8f1922"
Jan 22 09:39:47 crc kubenswrapper[4892]: E0122 09:39:47.419877 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1"
Jan 22 09:40:00 crc kubenswrapper[4892]: I0122 09:40:00.044449 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hwvgb"]
Jan 22 09:40:00 crc kubenswrapper[4892]: I0122 09:40:00.051860 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hwvgb"]
Jan 22 09:40:01 crc kubenswrapper[4892]: I0122 09:40:01.438684 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="606e5e49-0a85-4337-8aa2-12216467367e" path="/var/lib/kubelet/pods/606e5e49-0a85-4337-8aa2-12216467367e/volumes"
Jan 22 09:40:02 crc kubenswrapper[4892]: I0122 09:40:02.419177 4892 scope.go:117] "RemoveContainer" containerID="71f692d92f13e3b3fa0c90b264737d6f7079a49854fa5892e8209bda5b8f1922"
Jan 22 09:40:02 crc kubenswrapper[4892]: E0122 09:40:02.420007 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1"
Jan 22 09:40:16 crc kubenswrapper[4892]: I0122 09:40:16.418685 4892 scope.go:117] "RemoveContainer" containerID="71f692d92f13e3b3fa0c90b264737d6f7079a49854fa5892e8209bda5b8f1922"
Jan 22 09:40:16 crc kubenswrapper[4892]: E0122 09:40:16.419521 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1"
Jan 22 09:40:23 crc kubenswrapper[4892]: I0122 09:40:23.026773 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-kkq2t"]
Jan 22 09:40:23 crc kubenswrapper[4892]: I0122 09:40:23.035459 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-kkq2t"]
Jan 22 09:40:23 crc kubenswrapper[4892]: I0122 09:40:23.428979 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f95183f6-1315-4165-9659-6d1c77f3f9bd" path="/var/lib/kubelet/pods/f95183f6-1315-4165-9659-6d1c77f3f9bd/volumes"
Jan 22 09:40:24 crc kubenswrapper[4892]: I0122 09:40:24.042813 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pttx4"]
Jan 22 09:40:24 crc kubenswrapper[4892]: I0122 09:40:24.054367 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pttx4"]
Jan 22 09:40:25 crc kubenswrapper[4892]: I0122 09:40:25.435650 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9af63861-4d6e-46cb-ad5b-0b0161c47cb9" path="/var/lib/kubelet/pods/9af63861-4d6e-46cb-ad5b-0b0161c47cb9/volumes"
Jan 22 09:40:28 crc kubenswrapper[4892]: I0122 09:40:28.418629 4892 scope.go:117] "RemoveContainer" containerID="71f692d92f13e3b3fa0c90b264737d6f7079a49854fa5892e8209bda5b8f1922"
Jan 22 09:40:28 crc kubenswrapper[4892]: E0122 09:40:28.419957 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1"
Jan 22 09:40:34 crc kubenswrapper[4892]: I0122 09:40:34.923743 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-swnlf"]
Jan 22 09:40:34 crc kubenswrapper[4892]: I0122 09:40:34.926683 4892 util.go:30] "No sandbox for pod can be found.
Jan 22 09:40:34 crc kubenswrapper[4892]: I0122 09:40:34.926683 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-swnlf"
Jan 22 09:40:34 crc kubenswrapper[4892]: I0122 09:40:34.937635 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-swnlf"]
Jan 22 09:40:35 crc kubenswrapper[4892]: I0122 09:40:35.001435 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd4kq\" (UniqueName: \"kubernetes.io/projected/84292f7c-d604-45bb-a0c2-ecab659b6d01-kube-api-access-fd4kq\") pod \"community-operators-swnlf\" (UID: \"84292f7c-d604-45bb-a0c2-ecab659b6d01\") " pod="openshift-marketplace/community-operators-swnlf"
Jan 22 09:40:35 crc kubenswrapper[4892]: I0122 09:40:35.001522 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84292f7c-d604-45bb-a0c2-ecab659b6d01-catalog-content\") pod \"community-operators-swnlf\" (UID: \"84292f7c-d604-45bb-a0c2-ecab659b6d01\") " pod="openshift-marketplace/community-operators-swnlf"
Jan 22 09:40:35 crc kubenswrapper[4892]: I0122 09:40:35.001625 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84292f7c-d604-45bb-a0c2-ecab659b6d01-utilities\") pod \"community-operators-swnlf\" (UID: \"84292f7c-d604-45bb-a0c2-ecab659b6d01\") " pod="openshift-marketplace/community-operators-swnlf"
Jan 22 09:40:35 crc kubenswrapper[4892]: I0122 09:40:35.102935 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84292f7c-d604-45bb-a0c2-ecab659b6d01-catalog-content\") pod \"community-operators-swnlf\" (UID: \"84292f7c-d604-45bb-a0c2-ecab659b6d01\") " pod="openshift-marketplace/community-operators-swnlf"
Jan 22 09:40:35 crc kubenswrapper[4892]: I0122 09:40:35.103043 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84292f7c-d604-45bb-a0c2-ecab659b6d01-utilities\") pod \"community-operators-swnlf\" (UID: \"84292f7c-d604-45bb-a0c2-ecab659b6d01\") " pod="openshift-marketplace/community-operators-swnlf"
Jan 22 09:40:35 crc kubenswrapper[4892]: I0122 09:40:35.103194 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd4kq\" (UniqueName: \"kubernetes.io/projected/84292f7c-d604-45bb-a0c2-ecab659b6d01-kube-api-access-fd4kq\") pod \"community-operators-swnlf\" (UID: \"84292f7c-d604-45bb-a0c2-ecab659b6d01\") " pod="openshift-marketplace/community-operators-swnlf"
Jan 22 09:40:35 crc kubenswrapper[4892]: I0122 09:40:35.103532 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84292f7c-d604-45bb-a0c2-ecab659b6d01-catalog-content\") pod \"community-operators-swnlf\" (UID: \"84292f7c-d604-45bb-a0c2-ecab659b6d01\") " pod="openshift-marketplace/community-operators-swnlf"
Jan 22 09:40:35 crc kubenswrapper[4892]: I0122 09:40:35.103593 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84292f7c-d604-45bb-a0c2-ecab659b6d01-utilities\") pod \"community-operators-swnlf\" (UID: \"84292f7c-d604-45bb-a0c2-ecab659b6d01\") " pod="openshift-marketplace/community-operators-swnlf"
Jan 22 09:40:35 crc kubenswrapper[4892]: I0122 09:40:35.134049 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd4kq\" (UniqueName: \"kubernetes.io/projected/84292f7c-d604-45bb-a0c2-ecab659b6d01-kube-api-access-fd4kq\") pod \"community-operators-swnlf\" (UID: \"84292f7c-d604-45bb-a0c2-ecab659b6d01\") " pod="openshift-marketplace/community-operators-swnlf"
Jan 22 09:40:35 crc kubenswrapper[4892]: I0122 09:40:35.250055 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-swnlf"
Jan 22 09:40:35 crc kubenswrapper[4892]: I0122 09:40:35.807867 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-swnlf"]
Jan 22 09:40:36 crc kubenswrapper[4892]: I0122 09:40:36.580191 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-swnlf" event={"ID":"84292f7c-d604-45bb-a0c2-ecab659b6d01","Type":"ContainerStarted","Data":"3c3aaf66a2ea33101ec3a4dda457264c7d78542d13053c2689ad6f01ee1ec9b7"}
Jan 22 09:40:37 crc kubenswrapper[4892]: I0122 09:40:37.590329 4892 generic.go:334] "Generic (PLEG): container finished" podID="84292f7c-d604-45bb-a0c2-ecab659b6d01" containerID="3a14b679348b5a4d5d8c87ce465ed58141947f0e8b9da90ea196e9de086b9bf0" exitCode=0
Jan 22 09:40:37 crc kubenswrapper[4892]: I0122 09:40:37.590429 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-swnlf" event={"ID":"84292f7c-d604-45bb-a0c2-ecab659b6d01","Type":"ContainerDied","Data":"3a14b679348b5a4d5d8c87ce465ed58141947f0e8b9da90ea196e9de086b9bf0"}
Jan 22 09:40:38 crc kubenswrapper[4892]: I0122 09:40:38.115705 4892 scope.go:117] "RemoveContainer" containerID="b2a714e0373a0fd3ac2703eaf46fd10700b0d520c0e832693f53c7a7da1f47ba"
Jan 22 09:40:38 crc kubenswrapper[4892]: I0122 09:40:38.345324 4892 scope.go:117] "RemoveContainer" containerID="633c083489e5e6b148a2be15a7b2c054065f30aee0fd2bbf3554e768144ef9f1"
Jan 22 09:40:38 crc kubenswrapper[4892]: I0122 09:40:38.393695 4892 scope.go:117] "RemoveContainer" containerID="901500cbf1262ad90b04e55b8170f2240c2451a2c5587e3d938ad7894b2a31b7"
Jan 22 09:40:39 crc kubenswrapper[4892]: I0122 09:40:39.419414 4892 scope.go:117] "RemoveContainer" containerID="71f692d92f13e3b3fa0c90b264737d6f7079a49854fa5892e8209bda5b8f1922"
Jan 22 09:40:39 crc kubenswrapper[4892]: E0122 09:40:39.419843 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1"
Jan 22 09:40:46 crc kubenswrapper[4892]: I0122 09:40:46.688208 4892 generic.go:334] "Generic (PLEG): container finished" podID="84292f7c-d604-45bb-a0c2-ecab659b6d01" containerID="48bb0847d42f0e3e0c04edf5a2c5f92940694232d19a360b80e7660e0df952ac" exitCode=0
Jan 22 09:40:46 crc kubenswrapper[4892]: I0122 09:40:46.688346 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-swnlf" event={"ID":"84292f7c-d604-45bb-a0c2-ecab659b6d01","Type":"ContainerDied","Data":"48bb0847d42f0e3e0c04edf5a2c5f92940694232d19a360b80e7660e0df952ac"}
Jan 22 09:40:48 crc kubenswrapper[4892]: I0122 09:40:48.709604 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-swnlf" event={"ID":"84292f7c-d604-45bb-a0c2-ecab659b6d01","Type":"ContainerStarted","Data":"ffcef8a46d48e3be116f67c634b9244a7d635a0ed6dad68bdb3a7e3ebefb629c"}
Jan 22 09:40:48 crc kubenswrapper[4892]: I0122 09:40:48.734578 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-swnlf" podStartSLOduration=4.7386710579999995 podStartE2EDuration="14.734557377s" podCreationTimestamp="2026-01-22 09:40:34 +0000 UTC" firstStartedPulling="2026-01-22 09:40:37.59297126 +0000 UTC m=+1807.437050323" lastFinishedPulling="2026-01-22 09:40:47.588857579 +0000 UTC m=+1817.432936642" observedRunningTime="2026-01-22 09:40:48.729716288 +0000 UTC m=+1818.573795351" watchObservedRunningTime="2026-01-22 09:40:48.734557377 +0000 UTC m=+1818.578636440"
Jan 22 09:40:54 crc kubenswrapper[4892]: I0122 09:40:54.419567 4892 scope.go:117] "RemoveContainer" containerID="71f692d92f13e3b3fa0c90b264737d6f7079a49854fa5892e8209bda5b8f1922"
Jan 22 09:40:54 crc kubenswrapper[4892]: E0122 09:40:54.420360 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1"
Jan 22 09:40:55 crc kubenswrapper[4892]: I0122 09:40:55.250979 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-swnlf"
Jan 22 09:40:55 crc kubenswrapper[4892]: I0122 09:40:55.251045 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-swnlf"
Jan 22 09:40:55 crc kubenswrapper[4892]: I0122 09:40:55.295729 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-swnlf"
Jan 22 09:40:55 crc kubenswrapper[4892]: I0122 09:40:55.813804 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-swnlf"
Jan 22 09:40:55 crc kubenswrapper[4892]: I0122 09:40:55.853627 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-swnlf"]
Jan 22 09:40:57 crc kubenswrapper[4892]: I0122 09:40:57.784787 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-swnlf" podUID="84292f7c-d604-45bb-a0c2-ecab659b6d01" containerName="registry-server" containerID="cri-o://ffcef8a46d48e3be116f67c634b9244a7d635a0ed6dad68bdb3a7e3ebefb629c" gracePeriod=2
Jan 22 09:41:00 crc kubenswrapper[4892]: I0122 09:41:00.811595 4892 generic.go:334] "Generic (PLEG): container finished" podID="84292f7c-d604-45bb-a0c2-ecab659b6d01" containerID="ffcef8a46d48e3be116f67c634b9244a7d635a0ed6dad68bdb3a7e3ebefb629c" exitCode=0
Jan 22 09:41:00 crc kubenswrapper[4892]: I0122 09:41:00.811696 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-swnlf" event={"ID":"84292f7c-d604-45bb-a0c2-ecab659b6d01","Type":"ContainerDied","Data":"ffcef8a46d48e3be116f67c634b9244a7d635a0ed6dad68bdb3a7e3ebefb629c"}
Jan 22 09:41:01 crc kubenswrapper[4892]: I0122 09:41:01.267637 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-swnlf"
Jan 22 09:41:01 crc kubenswrapper[4892]: I0122 09:41:01.353899 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84292f7c-d604-45bb-a0c2-ecab659b6d01-utilities\") pod \"84292f7c-d604-45bb-a0c2-ecab659b6d01\" (UID: \"84292f7c-d604-45bb-a0c2-ecab659b6d01\") "
Jan 22 09:41:01 crc kubenswrapper[4892]: I0122 09:41:01.354319 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84292f7c-d604-45bb-a0c2-ecab659b6d01-catalog-content\") pod \"84292f7c-d604-45bb-a0c2-ecab659b6d01\" (UID: \"84292f7c-d604-45bb-a0c2-ecab659b6d01\") "
Jan 22 09:41:01 crc kubenswrapper[4892]: I0122 09:41:01.354404 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd4kq\" (UniqueName: \"kubernetes.io/projected/84292f7c-d604-45bb-a0c2-ecab659b6d01-kube-api-access-fd4kq\") pod \"84292f7c-d604-45bb-a0c2-ecab659b6d01\" (UID: \"84292f7c-d604-45bb-a0c2-ecab659b6d01\") "
Jan 22 09:41:01 crc kubenswrapper[4892]: I0122 09:41:01.357937 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84292f7c-d604-45bb-a0c2-ecab659b6d01-utilities" (OuterVolumeSpecName: "utilities") pod "84292f7c-d604-45bb-a0c2-ecab659b6d01" (UID: "84292f7c-d604-45bb-a0c2-ecab659b6d01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 09:41:01 crc kubenswrapper[4892]: I0122 09:41:01.363066 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84292f7c-d604-45bb-a0c2-ecab659b6d01-kube-api-access-fd4kq" (OuterVolumeSpecName: "kube-api-access-fd4kq") pod "84292f7c-d604-45bb-a0c2-ecab659b6d01" (UID: "84292f7c-d604-45bb-a0c2-ecab659b6d01"). InnerVolumeSpecName "kube-api-access-fd4kq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:41:01 crc kubenswrapper[4892]: I0122 09:41:01.416660 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84292f7c-d604-45bb-a0c2-ecab659b6d01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "84292f7c-d604-45bb-a0c2-ecab659b6d01" (UID: "84292f7c-d604-45bb-a0c2-ecab659b6d01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 09:41:01 crc kubenswrapper[4892]: I0122 09:41:01.456157 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84292f7c-d604-45bb-a0c2-ecab659b6d01-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 09:41:01 crc kubenswrapper[4892]: I0122 09:41:01.456186 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84292f7c-d604-45bb-a0c2-ecab659b6d01-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 09:41:01 crc kubenswrapper[4892]: I0122 09:41:01.456198 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fd4kq\" (UniqueName: \"kubernetes.io/projected/84292f7c-d604-45bb-a0c2-ecab659b6d01-kube-api-access-fd4kq\") on node \"crc\" DevicePath \"\""
Jan 22 09:41:01 crc kubenswrapper[4892]: I0122 09:41:01.821782 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-swnlf" event={"ID":"84292f7c-d604-45bb-a0c2-ecab659b6d01","Type":"ContainerDied","Data":"3c3aaf66a2ea33101ec3a4dda457264c7d78542d13053c2689ad6f01ee1ec9b7"}
Jan 22 09:41:01 crc kubenswrapper[4892]: I0122 09:41:01.821842 4892 scope.go:117] "RemoveContainer" containerID="ffcef8a46d48e3be116f67c634b9244a7d635a0ed6dad68bdb3a7e3ebefb629c"
Jan 22 09:41:01 crc kubenswrapper[4892]: I0122 09:41:01.821970 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-swnlf"
Jan 22 09:41:01 crc kubenswrapper[4892]: I0122 09:41:01.860567 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-swnlf"]
Jan 22 09:41:01 crc kubenswrapper[4892]: I0122 09:41:01.866169 4892 scope.go:117] "RemoveContainer" containerID="48bb0847d42f0e3e0c04edf5a2c5f92940694232d19a360b80e7660e0df952ac"
Jan 22 09:41:01 crc kubenswrapper[4892]: I0122 09:41:01.874083 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-swnlf"]
Jan 22 09:41:01 crc kubenswrapper[4892]: I0122 09:41:01.896273 4892 scope.go:117] "RemoveContainer" containerID="3a14b679348b5a4d5d8c87ce465ed58141947f0e8b9da90ea196e9de086b9bf0"
Jan 22 09:41:03 crc kubenswrapper[4892]: I0122 09:41:03.436988 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84292f7c-d604-45bb-a0c2-ecab659b6d01" path="/var/lib/kubelet/pods/84292f7c-d604-45bb-a0c2-ecab659b6d01/volumes"
Jan 22 09:41:07 crc kubenswrapper[4892]: I0122 09:41:07.419574 4892 scope.go:117] "RemoveContainer" containerID="71f692d92f13e3b3fa0c90b264737d6f7079a49854fa5892e8209bda5b8f1922"
Jan 22 09:41:07 crc kubenswrapper[4892]: E0122 09:41:07.420713 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1"
Jan 22 09:41:08 crc kubenswrapper[4892]: I0122 09:41:08.042517 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-86jc5"]
Jan 22 09:41:08 crc kubenswrapper[4892]: I0122 09:41:08.050674 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-86jc5"]
Jan 22 09:41:09 crc kubenswrapper[4892]: I0122 09:41:09.434039 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="180a0abc-388c-4a6a-bd24-91a416481a38" path="/var/lib/kubelet/pods/180a0abc-388c-4a6a-bd24-91a416481a38/volumes"
Jan 22 09:41:18 crc kubenswrapper[4892]: I0122 09:41:18.418489 4892 scope.go:117] "RemoveContainer" containerID="71f692d92f13e3b3fa0c90b264737d6f7079a49854fa5892e8209bda5b8f1922"
Jan 22 09:41:18 crc kubenswrapper[4892]: E0122 09:41:18.419266 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1"
Jan 22 09:41:18 crc kubenswrapper[4892]: I0122 09:41:18.955685 4892 generic.go:334] "Generic (PLEG): container finished" podID="9cd3e716-8070-42ec-87ad-4fc03fe2be23" containerID="cc59a2b643a0a7016809d4a14aee0f3c2df634556a43d460dcb10108829fd905" exitCode=0
Jan 22 09:41:18 crc kubenswrapper[4892]: I0122 09:41:18.955733 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-65kt2" event={"ID":"9cd3e716-8070-42ec-87ad-4fc03fe2be23","Type":"ContainerDied","Data":"cc59a2b643a0a7016809d4a14aee0f3c2df634556a43d460dcb10108829fd905"}
Jan 22 09:41:20 crc kubenswrapper[4892]: I0122 09:41:20.366678 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-65kt2"
Jan 22 09:41:20 crc kubenswrapper[4892]: I0122 09:41:20.458007 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cd3e716-8070-42ec-87ad-4fc03fe2be23-inventory\") pod \"9cd3e716-8070-42ec-87ad-4fc03fe2be23\" (UID: \"9cd3e716-8070-42ec-87ad-4fc03fe2be23\") "
Jan 22 09:41:20 crc kubenswrapper[4892]: I0122 09:41:20.458448 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9cd3e716-8070-42ec-87ad-4fc03fe2be23-ssh-key-openstack-edpm-ipam\") pod \"9cd3e716-8070-42ec-87ad-4fc03fe2be23\" (UID: \"9cd3e716-8070-42ec-87ad-4fc03fe2be23\") "
Jan 22 09:41:20 crc kubenswrapper[4892]: I0122 09:41:20.458558 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9kls\" (UniqueName: \"kubernetes.io/projected/9cd3e716-8070-42ec-87ad-4fc03fe2be23-kube-api-access-w9kls\") pod \"9cd3e716-8070-42ec-87ad-4fc03fe2be23\" (UID: \"9cd3e716-8070-42ec-87ad-4fc03fe2be23\") "
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:41:20 crc kubenswrapper[4892]: I0122 09:41:20.488895 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cd3e716-8070-42ec-87ad-4fc03fe2be23-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9cd3e716-8070-42ec-87ad-4fc03fe2be23" (UID: "9cd3e716-8070-42ec-87ad-4fc03fe2be23"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:41:20 crc kubenswrapper[4892]: I0122 09:41:20.488968 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cd3e716-8070-42ec-87ad-4fc03fe2be23-inventory" (OuterVolumeSpecName: "inventory") pod "9cd3e716-8070-42ec-87ad-4fc03fe2be23" (UID: "9cd3e716-8070-42ec-87ad-4fc03fe2be23"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:41:20 crc kubenswrapper[4892]: I0122 09:41:20.560521 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9kls\" (UniqueName: \"kubernetes.io/projected/9cd3e716-8070-42ec-87ad-4fc03fe2be23-kube-api-access-w9kls\") on node \"crc\" DevicePath \"\"" Jan 22 09:41:20 crc kubenswrapper[4892]: I0122 09:41:20.560562 4892 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cd3e716-8070-42ec-87ad-4fc03fe2be23-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 09:41:20 crc kubenswrapper[4892]: I0122 09:41:20.560575 4892 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9cd3e716-8070-42ec-87ad-4fc03fe2be23-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:41:20 crc kubenswrapper[4892]: I0122 09:41:20.975112 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-65kt2" event={"ID":"9cd3e716-8070-42ec-87ad-4fc03fe2be23","Type":"ContainerDied","Data":"2ac746772e2bb0728dddde7826cb9171b5a40b8ea3ccb031f9962d007500f306"} Jan 22 09:41:20 crc kubenswrapper[4892]: I0122 09:41:20.975162 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ac746772e2bb0728dddde7826cb9171b5a40b8ea3ccb031f9962d007500f306" Jan 22 09:41:20 crc kubenswrapper[4892]: I0122 09:41:20.975171 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-65kt2" Jan 22 09:41:21 crc kubenswrapper[4892]: I0122 09:41:21.066674 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g4gkj"] Jan 22 09:41:21 crc kubenswrapper[4892]: E0122 09:41:21.067260 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84292f7c-d604-45bb-a0c2-ecab659b6d01" containerName="registry-server" Jan 22 09:41:21 crc kubenswrapper[4892]: I0122 09:41:21.067319 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="84292f7c-d604-45bb-a0c2-ecab659b6d01" containerName="registry-server" Jan 22 09:41:21 crc kubenswrapper[4892]: E0122 09:41:21.067340 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84292f7c-d604-45bb-a0c2-ecab659b6d01" containerName="extract-content" Jan 22 09:41:21 crc kubenswrapper[4892]: I0122 09:41:21.067537 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="84292f7c-d604-45bb-a0c2-ecab659b6d01" containerName="extract-content" Jan 22 09:41:21 crc kubenswrapper[4892]: E0122 09:41:21.067572 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84292f7c-d604-45bb-a0c2-ecab659b6d01" containerName="extract-utilities" Jan 22 09:41:21 crc kubenswrapper[4892]: I0122 09:41:21.067584 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="84292f7c-d604-45bb-a0c2-ecab659b6d01" containerName="extract-utilities" Jan 22 09:41:21 crc kubenswrapper[4892]: E0122 09:41:21.067602 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cd3e716-8070-42ec-87ad-4fc03fe2be23" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 22 09:41:21 crc kubenswrapper[4892]: I0122 09:41:21.067612 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cd3e716-8070-42ec-87ad-4fc03fe2be23" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 22 09:41:21 crc kubenswrapper[4892]: I0122 09:41:21.067827 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="84292f7c-d604-45bb-a0c2-ecab659b6d01" containerName="registry-server" Jan 22 09:41:21 crc kubenswrapper[4892]: I0122 09:41:21.067846 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cd3e716-8070-42ec-87ad-4fc03fe2be23" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 22 09:41:21 crc kubenswrapper[4892]: I0122 09:41:21.068825 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g4gkj" Jan 22 09:41:21 crc kubenswrapper[4892]: I0122 09:41:21.071392 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:41:21 crc kubenswrapper[4892]: I0122 09:41:21.071877 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lbm5h" Jan 22 09:41:21 crc kubenswrapper[4892]: I0122 09:41:21.072986 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:41:21 crc kubenswrapper[4892]: I0122 09:41:21.074177 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:41:21 crc kubenswrapper[4892]: I0122 09:41:21.076698 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g4gkj"] Jan 22 09:41:21 crc kubenswrapper[4892]: I0122 09:41:21.170770 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab573651-bad0-413d-9c16-46aac4818b9b-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g4gkj\" (UID: \"ab573651-bad0-413d-9c16-46aac4818b9b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g4gkj" Jan 22 09:41:21 crc kubenswrapper[4892]: I0122 09:41:21.170823 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab573651-bad0-413d-9c16-46aac4818b9b-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g4gkj\" (UID: \"ab573651-bad0-413d-9c16-46aac4818b9b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g4gkj" Jan 22 09:41:21 crc kubenswrapper[4892]: I0122 09:41:21.170915 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6wgt\" (UniqueName: \"kubernetes.io/projected/ab573651-bad0-413d-9c16-46aac4818b9b-kube-api-access-k6wgt\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g4gkj\" (UID: \"ab573651-bad0-413d-9c16-46aac4818b9b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g4gkj" Jan 22 09:41:21 crc kubenswrapper[4892]: I0122 09:41:21.272504 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab573651-bad0-413d-9c16-46aac4818b9b-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g4gkj\" (UID: \"ab573651-bad0-413d-9c16-46aac4818b9b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g4gkj" Jan 22 09:41:21 crc kubenswrapper[4892]: I0122 09:41:21.273027 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab573651-bad0-413d-9c16-46aac4818b9b-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g4gkj\" (UID: \"ab573651-bad0-413d-9c16-46aac4818b9b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g4gkj" Jan 22 09:41:21 crc kubenswrapper[4892]: I0122 09:41:21.273381 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6wgt\" (UniqueName: 
\"kubernetes.io/projected/ab573651-bad0-413d-9c16-46aac4818b9b-kube-api-access-k6wgt\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g4gkj\" (UID: \"ab573651-bad0-413d-9c16-46aac4818b9b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g4gkj" Jan 22 09:41:21 crc kubenswrapper[4892]: I0122 09:41:21.276164 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab573651-bad0-413d-9c16-46aac4818b9b-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g4gkj\" (UID: \"ab573651-bad0-413d-9c16-46aac4818b9b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g4gkj" Jan 22 09:41:21 crc kubenswrapper[4892]: I0122 09:41:21.284931 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab573651-bad0-413d-9c16-46aac4818b9b-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g4gkj\" (UID: \"ab573651-bad0-413d-9c16-46aac4818b9b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g4gkj" Jan 22 09:41:21 crc kubenswrapper[4892]: I0122 09:41:21.290729 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6wgt\" (UniqueName: \"kubernetes.io/projected/ab573651-bad0-413d-9c16-46aac4818b9b-kube-api-access-k6wgt\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g4gkj\" (UID: \"ab573651-bad0-413d-9c16-46aac4818b9b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g4gkj" Jan 22 09:41:21 crc kubenswrapper[4892]: I0122 09:41:21.441310 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g4gkj" Jan 22 09:41:21 crc kubenswrapper[4892]: I0122 09:41:21.973521 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g4gkj"] Jan 22 09:41:21 crc kubenswrapper[4892]: I0122 09:41:21.987120 4892 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 09:41:22 crc kubenswrapper[4892]: I0122 09:41:22.991531 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g4gkj" event={"ID":"ab573651-bad0-413d-9c16-46aac4818b9b","Type":"ContainerStarted","Data":"f0e5cd292fbaf6b1785f7980bf900de155eec8df4098eaed992828cb611ee2bd"} Jan 22 09:41:26 crc kubenswrapper[4892]: I0122 09:41:26.019345 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g4gkj" event={"ID":"ab573651-bad0-413d-9c16-46aac4818b9b","Type":"ContainerStarted","Data":"64da9433fc38467c42ad6e271b0ad819bb4ce7a9b2d100b8fcdb6d828abf3916"} Jan 22 09:41:26 crc kubenswrapper[4892]: I0122 09:41:26.039335 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g4gkj" podStartSLOduration=2.418931141 podStartE2EDuration="5.039313647s" podCreationTimestamp="2026-01-22 09:41:21 +0000 UTC" firstStartedPulling="2026-01-22 09:41:21.986915233 +0000 UTC m=+1851.830994296" lastFinishedPulling="2026-01-22 09:41:24.607297749 +0000 UTC m=+1854.451376802" observedRunningTime="2026-01-22 09:41:26.034664342 +0000 UTC m=+1855.878743415" watchObservedRunningTime="2026-01-22 09:41:26.039313647 +0000 UTC 
m=+1855.883392710" Jan 22 09:41:31 crc kubenswrapper[4892]: I0122 09:41:31.425347 4892 scope.go:117] "RemoveContainer" containerID="71f692d92f13e3b3fa0c90b264737d6f7079a49854fa5892e8209bda5b8f1922" Jan 22 09:41:31 crc kubenswrapper[4892]: E0122 09:41:31.426180 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:41:38 crc kubenswrapper[4892]: I0122 09:41:38.484502 4892 scope.go:117] "RemoveContainer" containerID="f21ff5de1f4cad085f5b1c2c542fff98cd191d78b3c83e1f235e31deb9a6be5b" Jan 22 09:41:45 crc kubenswrapper[4892]: I0122 09:41:45.419915 4892 scope.go:117] "RemoveContainer" containerID="71f692d92f13e3b3fa0c90b264737d6f7079a49854fa5892e8209bda5b8f1922" Jan 22 09:41:45 crc kubenswrapper[4892]: E0122 09:41:45.420597 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:41:57 crc kubenswrapper[4892]: I0122 09:41:57.418572 4892 scope.go:117] "RemoveContainer" containerID="71f692d92f13e3b3fa0c90b264737d6f7079a49854fa5892e8209bda5b8f1922" Jan 22 09:41:57 crc kubenswrapper[4892]: E0122 09:41:57.419414 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:42:09 crc kubenswrapper[4892]: I0122 09:42:09.418911 4892 scope.go:117] "RemoveContainer" containerID="71f692d92f13e3b3fa0c90b264737d6f7079a49854fa5892e8209bda5b8f1922" Jan 22 09:42:09 crc kubenswrapper[4892]: E0122 09:42:09.421516 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:42:23 crc kubenswrapper[4892]: I0122 09:42:23.419036 4892 scope.go:117] "RemoveContainer" containerID="71f692d92f13e3b3fa0c90b264737d6f7079a49854fa5892e8209bda5b8f1922" Jan 22 09:42:23 crc kubenswrapper[4892]: E0122 09:42:23.420078 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" 
podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:42:36 crc kubenswrapper[4892]: I0122 09:42:36.418872 4892 scope.go:117] "RemoveContainer" containerID="71f692d92f13e3b3fa0c90b264737d6f7079a49854fa5892e8209bda5b8f1922" Jan 22 09:42:36 crc kubenswrapper[4892]: E0122 09:42:36.419654 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:42:48 crc kubenswrapper[4892]: I0122 09:42:48.419600 4892 scope.go:117] "RemoveContainer" containerID="71f692d92f13e3b3fa0c90b264737d6f7079a49854fa5892e8209bda5b8f1922" Jan 22 09:42:49 crc kubenswrapper[4892]: I0122 09:42:49.781987 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerStarted","Data":"96413b14691349eddf0937a4dcff7c33027ccb1e003cd6440c2c014b672da430"} Jan 22 09:43:07 crc kubenswrapper[4892]: I0122 09:43:07.929465 4892 generic.go:334] "Generic (PLEG): container finished" podID="ab573651-bad0-413d-9c16-46aac4818b9b" containerID="64da9433fc38467c42ad6e271b0ad819bb4ce7a9b2d100b8fcdb6d828abf3916" exitCode=0 Jan 22 09:43:07 crc kubenswrapper[4892]: I0122 09:43:07.930097 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g4gkj" event={"ID":"ab573651-bad0-413d-9c16-46aac4818b9b","Type":"ContainerDied","Data":"64da9433fc38467c42ad6e271b0ad819bb4ce7a9b2d100b8fcdb6d828abf3916"} Jan 22 09:43:09 crc kubenswrapper[4892]: I0122 09:43:09.336757 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g4gkj" Jan 22 09:43:09 crc kubenswrapper[4892]: I0122 09:43:09.476564 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6wgt\" (UniqueName: \"kubernetes.io/projected/ab573651-bad0-413d-9c16-46aac4818b9b-kube-api-access-k6wgt\") pod \"ab573651-bad0-413d-9c16-46aac4818b9b\" (UID: \"ab573651-bad0-413d-9c16-46aac4818b9b\") " Jan 22 09:43:09 crc kubenswrapper[4892]: I0122 09:43:09.476793 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab573651-bad0-413d-9c16-46aac4818b9b-ssh-key-openstack-edpm-ipam\") pod \"ab573651-bad0-413d-9c16-46aac4818b9b\" (UID: \"ab573651-bad0-413d-9c16-46aac4818b9b\") " Jan 22 09:43:09 crc kubenswrapper[4892]: I0122 09:43:09.476815 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab573651-bad0-413d-9c16-46aac4818b9b-inventory\") pod \"ab573651-bad0-413d-9c16-46aac4818b9b\" (UID: \"ab573651-bad0-413d-9c16-46aac4818b9b\") " Jan 22 09:43:09 crc kubenswrapper[4892]: I0122 09:43:09.487703 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab573651-bad0-413d-9c16-46aac4818b9b-kube-api-access-k6wgt" (OuterVolumeSpecName: "kube-api-access-k6wgt") pod "ab573651-bad0-413d-9c16-46aac4818b9b" (UID: "ab573651-bad0-413d-9c16-46aac4818b9b"). InnerVolumeSpecName "kube-api-access-k6wgt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:43:09 crc kubenswrapper[4892]: I0122 09:43:09.508361 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab573651-bad0-413d-9c16-46aac4818b9b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ab573651-bad0-413d-9c16-46aac4818b9b" (UID: "ab573651-bad0-413d-9c16-46aac4818b9b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:43:09 crc kubenswrapper[4892]: I0122 09:43:09.510437 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab573651-bad0-413d-9c16-46aac4818b9b-inventory" (OuterVolumeSpecName: "inventory") pod "ab573651-bad0-413d-9c16-46aac4818b9b" (UID: "ab573651-bad0-413d-9c16-46aac4818b9b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:43:09 crc kubenswrapper[4892]: I0122 09:43:09.578811 4892 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab573651-bad0-413d-9c16-46aac4818b9b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:43:09 crc kubenswrapper[4892]: I0122 09:43:09.579087 4892 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab573651-bad0-413d-9c16-46aac4818b9b-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 09:43:09 crc kubenswrapper[4892]: I0122 09:43:09.579101 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6wgt\" (UniqueName: \"kubernetes.io/projected/ab573651-bad0-413d-9c16-46aac4818b9b-kube-api-access-k6wgt\") on node \"crc\" DevicePath \"\"" Jan 22 09:43:09 crc kubenswrapper[4892]: I0122 09:43:09.956066 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g4gkj" event={"ID":"ab573651-bad0-413d-9c16-46aac4818b9b","Type":"ContainerDied","Data":"f0e5cd292fbaf6b1785f7980bf900de155eec8df4098eaed992828cb611ee2bd"} Jan 22 09:43:09 crc kubenswrapper[4892]: I0122 09:43:09.956105 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0e5cd292fbaf6b1785f7980bf900de155eec8df4098eaed992828cb611ee2bd" Jan 22 09:43:09 crc kubenswrapper[4892]: I0122 09:43:09.956164 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g4gkj" Jan 22 09:43:10 crc kubenswrapper[4892]: I0122 09:43:10.044986 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6lb4t"] Jan 22 09:43:10 crc kubenswrapper[4892]: E0122 09:43:10.045485 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab573651-bad0-413d-9c16-46aac4818b9b" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 22 09:43:10 crc kubenswrapper[4892]: I0122 09:43:10.045515 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab573651-bad0-413d-9c16-46aac4818b9b" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 22 09:43:10 crc kubenswrapper[4892]: I0122 09:43:10.045680 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab573651-bad0-413d-9c16-46aac4818b9b" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 22 09:43:10 crc kubenswrapper[4892]: I0122 09:43:10.046383 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6lb4t" Jan 22 09:43:10 crc kubenswrapper[4892]: I0122 09:43:10.048844 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:43:10 crc kubenswrapper[4892]: I0122 09:43:10.049764 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:43:10 crc kubenswrapper[4892]: I0122 09:43:10.050904 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lbm5h" Jan 22 09:43:10 crc kubenswrapper[4892]: I0122 09:43:10.056639 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:43:10 crc kubenswrapper[4892]: I0122 09:43:10.057455 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6lb4t"] Jan 22 09:43:10 crc kubenswrapper[4892]: I0122 09:43:10.192784 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49d64b56-37f0-45f2-8aec-a3dfbf171f09-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6lb4t\" (UID: \"49d64b56-37f0-45f2-8aec-a3dfbf171f09\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6lb4t" Jan 22 09:43:10 crc kubenswrapper[4892]: I0122 09:43:10.192866 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49d64b56-37f0-45f2-8aec-a3dfbf171f09-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6lb4t\" (UID: \"49d64b56-37f0-45f2-8aec-a3dfbf171f09\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6lb4t" Jan 22 09:43:10 crc kubenswrapper[4892]: I0122 09:43:10.193449 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdh4x\" (UniqueName: \"kubernetes.io/projected/49d64b56-37f0-45f2-8aec-a3dfbf171f09-kube-api-access-hdh4x\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6lb4t\" (UID: \"49d64b56-37f0-45f2-8aec-a3dfbf171f09\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6lb4t" Jan 22 09:43:10 crc kubenswrapper[4892]: I0122 09:43:10.295448 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49d64b56-37f0-45f2-8aec-a3dfbf171f09-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6lb4t\" (UID: \"49d64b56-37f0-45f2-8aec-a3dfbf171f09\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6lb4t" Jan 22 09:43:10 crc kubenswrapper[4892]: I0122 09:43:10.295577 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49d64b56-37f0-45f2-8aec-a3dfbf171f09-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6lb4t\" (UID: \"49d64b56-37f0-45f2-8aec-a3dfbf171f09\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6lb4t" Jan 22 09:43:10 crc kubenswrapper[4892]: I0122 09:43:10.295600 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdh4x\" (UniqueName: \"kubernetes.io/projected/49d64b56-37f0-45f2-8aec-a3dfbf171f09-kube-api-access-hdh4x\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6lb4t\" (UID: \"49d64b56-37f0-45f2-8aec-a3dfbf171f09\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6lb4t" Jan 22 09:43:10 crc kubenswrapper[4892]: I0122 09:43:10.300030 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49d64b56-37f0-45f2-8aec-a3dfbf171f09-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6lb4t\" (UID: \"49d64b56-37f0-45f2-8aec-a3dfbf171f09\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6lb4t" Jan 22 09:43:10 crc kubenswrapper[4892]: I0122 09:43:10.300891 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49d64b56-37f0-45f2-8aec-a3dfbf171f09-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6lb4t\" (UID: \"49d64b56-37f0-45f2-8aec-a3dfbf171f09\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6lb4t" Jan 22 09:43:10 crc kubenswrapper[4892]: I0122 09:43:10.312389 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdh4x\" (UniqueName: \"kubernetes.io/projected/49d64b56-37f0-45f2-8aec-a3dfbf171f09-kube-api-access-hdh4x\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6lb4t\" (UID: \"49d64b56-37f0-45f2-8aec-a3dfbf171f09\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6lb4t" Jan 22 09:43:10 crc kubenswrapper[4892]: I0122 09:43:10.362811 4892 util.go:30] "No sandbox for pod can be found. 
Jan 22 09:43:10 crc kubenswrapper[4892]: I0122 09:43:10.362811 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6lb4t"
Jan 22 09:43:11 crc kubenswrapper[4892]: I0122 09:43:11.230066 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6lb4t"]
Jan 22 09:43:11 crc kubenswrapper[4892]: I0122 09:43:11.970507 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6lb4t" event={"ID":"49d64b56-37f0-45f2-8aec-a3dfbf171f09","Type":"ContainerStarted","Data":"2c0415ae4782d7b0028485bd767caa257837df8672687a99ff23299218708b7f"}
Jan 22 09:43:17 crc kubenswrapper[4892]: I0122 09:43:17.019903 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6lb4t" event={"ID":"49d64b56-37f0-45f2-8aec-a3dfbf171f09","Type":"ContainerStarted","Data":"fc2ed94db3107acf209f7c09f02d2eba8d0f98d1d9522fa835a0ca9d7d9709f9"}
Jan 22 09:43:17 crc kubenswrapper[4892]: I0122 09:43:17.037362 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6lb4t" podStartSLOduration=2.877693869 podStartE2EDuration="7.037340043s" podCreationTimestamp="2026-01-22 09:43:10 +0000 UTC" firstStartedPulling="2026-01-22 09:43:11.230103918 +0000 UTC m=+1961.074182981" lastFinishedPulling="2026-01-22 09:43:15.389750092 +0000 UTC m=+1965.233829155" observedRunningTime="2026-01-22 09:43:17.033447527 +0000 UTC m=+1966.877526590" watchObservedRunningTime="2026-01-22 09:43:17.037340043 +0000 UTC m=+1966.881419126"
Jan 22 09:43:22 crc kubenswrapper[4892]: I0122 09:43:22.062388 4892 generic.go:334] "Generic (PLEG): container finished" podID="49d64b56-37f0-45f2-8aec-a3dfbf171f09" containerID="fc2ed94db3107acf209f7c09f02d2eba8d0f98d1d9522fa835a0ca9d7d9709f9" exitCode=0
Jan 22 09:43:22 crc kubenswrapper[4892]: I0122 09:43:22.062458 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6lb4t" event={"ID":"49d64b56-37f0-45f2-8aec-a3dfbf171f09","Type":"ContainerDied","Data":"fc2ed94db3107acf209f7c09f02d2eba8d0f98d1d9522fa835a0ca9d7d9709f9"}
Jan 22 09:43:23 crc kubenswrapper[4892]: I0122 09:43:23.441022 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6lb4t"
Jan 22 09:43:23 crc kubenswrapper[4892]: I0122 09:43:23.487705 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49d64b56-37f0-45f2-8aec-a3dfbf171f09-inventory\") pod \"49d64b56-37f0-45f2-8aec-a3dfbf171f09\" (UID: \"49d64b56-37f0-45f2-8aec-a3dfbf171f09\") "
Jan 22 09:43:23 crc kubenswrapper[4892]: I0122 09:43:23.487879 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdh4x\" (UniqueName: \"kubernetes.io/projected/49d64b56-37f0-45f2-8aec-a3dfbf171f09-kube-api-access-hdh4x\") pod \"49d64b56-37f0-45f2-8aec-a3dfbf171f09\" (UID: \"49d64b56-37f0-45f2-8aec-a3dfbf171f09\") "
Jan 22 09:43:23 crc kubenswrapper[4892]: I0122 09:43:23.487947 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49d64b56-37f0-45f2-8aec-a3dfbf171f09-ssh-key-openstack-edpm-ipam\") pod \"49d64b56-37f0-45f2-8aec-a3dfbf171f09\" (UID: \"49d64b56-37f0-45f2-8aec-a3dfbf171f09\") "
Jan 22 09:43:23 crc kubenswrapper[4892]: I0122 09:43:23.493858 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49d64b56-37f0-45f2-8aec-a3dfbf171f09-kube-api-access-hdh4x" (OuterVolumeSpecName: "kube-api-access-hdh4x") pod "49d64b56-37f0-45f2-8aec-a3dfbf171f09" (UID: "49d64b56-37f0-45f2-8aec-a3dfbf171f09"). InnerVolumeSpecName "kube-api-access-hdh4x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:43:23 crc kubenswrapper[4892]: I0122 09:43:23.513522 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49d64b56-37f0-45f2-8aec-a3dfbf171f09-inventory" (OuterVolumeSpecName: "inventory") pod "49d64b56-37f0-45f2-8aec-a3dfbf171f09" (UID: "49d64b56-37f0-45f2-8aec-a3dfbf171f09"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:43:23 crc kubenswrapper[4892]: I0122 09:43:23.514917 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49d64b56-37f0-45f2-8aec-a3dfbf171f09-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "49d64b56-37f0-45f2-8aec-a3dfbf171f09" (UID: "49d64b56-37f0-45f2-8aec-a3dfbf171f09"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:43:23 crc kubenswrapper[4892]: I0122 09:43:23.589785 4892 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49d64b56-37f0-45f2-8aec-a3dfbf171f09-inventory\") on node \"crc\" DevicePath \"\""
Jan 22 09:43:23 crc kubenswrapper[4892]: I0122 09:43:23.589814 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdh4x\" (UniqueName: \"kubernetes.io/projected/49d64b56-37f0-45f2-8aec-a3dfbf171f09-kube-api-access-hdh4x\") on node \"crc\" DevicePath \"\""
Jan 22 09:43:23 crc kubenswrapper[4892]: I0122 09:43:23.589849 4892 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49d64b56-37f0-45f2-8aec-a3dfbf171f09-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 22 09:43:24 crc kubenswrapper[4892]: I0122 09:43:24.080683 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6lb4t" event={"ID":"49d64b56-37f0-45f2-8aec-a3dfbf171f09","Type":"ContainerDied","Data":"2c0415ae4782d7b0028485bd767caa257837df8672687a99ff23299218708b7f"}
Jan 22 09:43:24 crc kubenswrapper[4892]: I0122 09:43:24.080725 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c0415ae4782d7b0028485bd767caa257837df8672687a99ff23299218708b7f"
Jan 22 09:43:24 crc kubenswrapper[4892]: I0122 09:43:24.080726 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6lb4t"
Jan 22 09:43:24 crc kubenswrapper[4892]: I0122 09:43:24.162650 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-mdwhd"]
Jan 22 09:43:24 crc kubenswrapper[4892]: E0122 09:43:24.163393 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49d64b56-37f0-45f2-8aec-a3dfbf171f09" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 22 09:43:24 crc kubenswrapper[4892]: I0122 09:43:24.163417 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="49d64b56-37f0-45f2-8aec-a3dfbf171f09" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 22 09:43:24 crc kubenswrapper[4892]: I0122 09:43:24.163656 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="49d64b56-37f0-45f2-8aec-a3dfbf171f09" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 22 09:43:24 crc kubenswrapper[4892]: I0122 09:43:24.164506 4892 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mdwhd" Jan 22 09:43:24 crc kubenswrapper[4892]: I0122 09:43:24.169501 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:43:24 crc kubenswrapper[4892]: I0122 09:43:24.169564 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:43:24 crc kubenswrapper[4892]: I0122 09:43:24.169598 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lbm5h" Jan 22 09:43:24 crc kubenswrapper[4892]: I0122 09:43:24.169920 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:43:24 crc kubenswrapper[4892]: I0122 09:43:24.174764 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-mdwhd"] Jan 22 09:43:24 crc kubenswrapper[4892]: I0122 09:43:24.201375 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/402a9581-6783-46e0-8147-2e443d9a0608-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mdwhd\" (UID: \"402a9581-6783-46e0-8147-2e443d9a0608\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mdwhd" Jan 22 09:43:24 crc kubenswrapper[4892]: I0122 09:43:24.201491 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x45xt\" (UniqueName: \"kubernetes.io/projected/402a9581-6783-46e0-8147-2e443d9a0608-kube-api-access-x45xt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mdwhd\" (UID: \"402a9581-6783-46e0-8147-2e443d9a0608\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mdwhd" Jan 22 09:43:24 crc kubenswrapper[4892]: I0122 09:43:24.201546 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/402a9581-6783-46e0-8147-2e443d9a0608-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mdwhd\" (UID: \"402a9581-6783-46e0-8147-2e443d9a0608\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mdwhd" Jan 22 09:43:24 crc kubenswrapper[4892]: I0122 09:43:24.303650 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/402a9581-6783-46e0-8147-2e443d9a0608-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mdwhd\" (UID: \"402a9581-6783-46e0-8147-2e443d9a0608\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mdwhd" Jan 22 09:43:24 crc kubenswrapper[4892]: I0122 09:43:24.303781 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x45xt\" (UniqueName: \"kubernetes.io/projected/402a9581-6783-46e0-8147-2e443d9a0608-kube-api-access-x45xt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mdwhd\" (UID: \"402a9581-6783-46e0-8147-2e443d9a0608\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mdwhd" Jan 22 09:43:24 crc kubenswrapper[4892]: I0122 09:43:24.303830 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/402a9581-6783-46e0-8147-2e443d9a0608-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-mdwhd\" (UID: \"402a9581-6783-46e0-8147-2e443d9a0608\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mdwhd" Jan 22 09:43:24 crc kubenswrapper[4892]: I0122 09:43:24.308882 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/402a9581-6783-46e0-8147-2e443d9a0608-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mdwhd\" (UID: \"402a9581-6783-46e0-8147-2e443d9a0608\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mdwhd" Jan 22 09:43:24 crc kubenswrapper[4892]: I0122 09:43:24.308890 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/402a9581-6783-46e0-8147-2e443d9a0608-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mdwhd\" (UID: \"402a9581-6783-46e0-8147-2e443d9a0608\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mdwhd" Jan 22 09:43:24 crc kubenswrapper[4892]: I0122 09:43:24.322217 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x45xt\" (UniqueName: \"kubernetes.io/projected/402a9581-6783-46e0-8147-2e443d9a0608-kube-api-access-x45xt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mdwhd\" (UID: \"402a9581-6783-46e0-8147-2e443d9a0608\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mdwhd" Jan 22 09:43:24 crc kubenswrapper[4892]: I0122 09:43:24.520397 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mdwhd" Jan 22 09:43:25 crc kubenswrapper[4892]: I0122 09:43:25.007005 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-mdwhd"] Jan 22 09:43:25 crc kubenswrapper[4892]: I0122 09:43:25.088376 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mdwhd" event={"ID":"402a9581-6783-46e0-8147-2e443d9a0608","Type":"ContainerStarted","Data":"5493e557a88508e84b49b46778e41dc53a2099eef7d8cec66afc31e7d1f36f33"} Jan 22 09:43:27 crc kubenswrapper[4892]: I0122 09:43:27.104792 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mdwhd" event={"ID":"402a9581-6783-46e0-8147-2e443d9a0608","Type":"ContainerStarted","Data":"a30848f4105395e558e7da13d08b081886fc9ec3b732e2ca3ff3a58c60930bef"} Jan 22 09:43:27 crc kubenswrapper[4892]: I0122 09:43:27.128322 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mdwhd" podStartSLOduration=2.235273542 podStartE2EDuration="3.128300771s" podCreationTimestamp="2026-01-22 09:43:24 +0000 UTC" firstStartedPulling="2026-01-22 09:43:25.018710494 +0000 UTC m=+1974.862789557" lastFinishedPulling="2026-01-22 09:43:25.911737723 +0000 UTC m=+1975.755816786" observedRunningTime="2026-01-22 09:43:27.117331661 +0000 UTC m=+1976.961410724" watchObservedRunningTime="2026-01-22 09:43:27.128300771 +0000 UTC m=+1976.972379834" Jan 22 09:44:05 crc kubenswrapper[4892]: I0122 09:44:05.539526 4892 generic.go:334] "Generic (PLEG): container finished" podID="402a9581-6783-46e0-8147-2e443d9a0608" containerID="a30848f4105395e558e7da13d08b081886fc9ec3b732e2ca3ff3a58c60930bef" exitCode=0 Jan 22 09:44:05 crc kubenswrapper[4892]: I0122 09:44:05.539714 4892 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mdwhd" event={"ID":"402a9581-6783-46e0-8147-2e443d9a0608","Type":"ContainerDied","Data":"a30848f4105395e558e7da13d08b081886fc9ec3b732e2ca3ff3a58c60930bef"} Jan 22 09:44:07 crc kubenswrapper[4892]: I0122 09:44:07.023355 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mdwhd" Jan 22 09:44:07 crc kubenswrapper[4892]: I0122 09:44:07.192478 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x45xt\" (UniqueName: \"kubernetes.io/projected/402a9581-6783-46e0-8147-2e443d9a0608-kube-api-access-x45xt\") pod \"402a9581-6783-46e0-8147-2e443d9a0608\" (UID: \"402a9581-6783-46e0-8147-2e443d9a0608\") " Jan 22 09:44:07 crc kubenswrapper[4892]: I0122 09:44:07.192542 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/402a9581-6783-46e0-8147-2e443d9a0608-ssh-key-openstack-edpm-ipam\") pod \"402a9581-6783-46e0-8147-2e443d9a0608\" (UID: \"402a9581-6783-46e0-8147-2e443d9a0608\") " Jan 22 09:44:07 crc kubenswrapper[4892]: I0122 09:44:07.192760 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/402a9581-6783-46e0-8147-2e443d9a0608-inventory\") pod \"402a9581-6783-46e0-8147-2e443d9a0608\" (UID: \"402a9581-6783-46e0-8147-2e443d9a0608\") " Jan 22 09:44:07 crc kubenswrapper[4892]: I0122 09:44:07.198617 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/402a9581-6783-46e0-8147-2e443d9a0608-kube-api-access-x45xt" (OuterVolumeSpecName: "kube-api-access-x45xt") pod "402a9581-6783-46e0-8147-2e443d9a0608" (UID: "402a9581-6783-46e0-8147-2e443d9a0608"). InnerVolumeSpecName "kube-api-access-x45xt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:44:07 crc kubenswrapper[4892]: I0122 09:44:07.220217 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/402a9581-6783-46e0-8147-2e443d9a0608-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "402a9581-6783-46e0-8147-2e443d9a0608" (UID: "402a9581-6783-46e0-8147-2e443d9a0608"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:44:07 crc kubenswrapper[4892]: I0122 09:44:07.223337 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/402a9581-6783-46e0-8147-2e443d9a0608-inventory" (OuterVolumeSpecName: "inventory") pod "402a9581-6783-46e0-8147-2e443d9a0608" (UID: "402a9581-6783-46e0-8147-2e443d9a0608"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:44:07 crc kubenswrapper[4892]: I0122 09:44:07.295425 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x45xt\" (UniqueName: \"kubernetes.io/projected/402a9581-6783-46e0-8147-2e443d9a0608-kube-api-access-x45xt\") on node \"crc\" DevicePath \"\"" Jan 22 09:44:07 crc kubenswrapper[4892]: I0122 09:44:07.295584 4892 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/402a9581-6783-46e0-8147-2e443d9a0608-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:44:07 crc kubenswrapper[4892]: I0122 09:44:07.295767 4892 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/402a9581-6783-46e0-8147-2e443d9a0608-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 09:44:07 crc kubenswrapper[4892]: I0122 09:44:07.558942 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mdwhd" event={"ID":"402a9581-6783-46e0-8147-2e443d9a0608","Type":"ContainerDied","Data":"5493e557a88508e84b49b46778e41dc53a2099eef7d8cec66afc31e7d1f36f33"} Jan 22 09:44:07 crc kubenswrapper[4892]: I0122 09:44:07.558980 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5493e557a88508e84b49b46778e41dc53a2099eef7d8cec66afc31e7d1f36f33" Jan 22 09:44:07 crc kubenswrapper[4892]: I0122 09:44:07.559048 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mdwhd" Jan 22 09:44:07 crc kubenswrapper[4892]: I0122 09:44:07.708570 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tqbhz"] Jan 22 09:44:07 crc kubenswrapper[4892]: E0122 09:44:07.708918 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="402a9581-6783-46e0-8147-2e443d9a0608" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:44:07 crc kubenswrapper[4892]: I0122 09:44:07.708930 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="402a9581-6783-46e0-8147-2e443d9a0608" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:44:07 crc kubenswrapper[4892]: I0122 09:44:07.709105 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="402a9581-6783-46e0-8147-2e443d9a0608" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:44:07 crc kubenswrapper[4892]: I0122 09:44:07.709703 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tqbhz" Jan 22 09:44:07 crc kubenswrapper[4892]: I0122 09:44:07.711214 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:44:07 crc kubenswrapper[4892]: I0122 09:44:07.711535 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lbm5h" Jan 22 09:44:07 crc kubenswrapper[4892]: I0122 09:44:07.711699 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:44:07 crc kubenswrapper[4892]: I0122 09:44:07.714981 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:44:07 crc kubenswrapper[4892]: I0122 09:44:07.723771 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tqbhz"] Jan 22 09:44:07 crc kubenswrapper[4892]: I0122 09:44:07.804907 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ded4dd1-51b6-427d-8f8f-44da3828ef6b-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tqbhz\" (UID: \"7ded4dd1-51b6-427d-8f8f-44da3828ef6b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tqbhz" Jan 22 09:44:07 crc kubenswrapper[4892]: I0122 09:44:07.805004 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ded4dd1-51b6-427d-8f8f-44da3828ef6b-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tqbhz\" (UID: \"7ded4dd1-51b6-427d-8f8f-44da3828ef6b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tqbhz" Jan 22 09:44:07 crc kubenswrapper[4892]: I0122 09:44:07.805342 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsm6z\" (UniqueName: \"kubernetes.io/projected/7ded4dd1-51b6-427d-8f8f-44da3828ef6b-kube-api-access-gsm6z\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tqbhz\" (UID: \"7ded4dd1-51b6-427d-8f8f-44da3828ef6b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tqbhz" Jan 22 09:44:07 crc kubenswrapper[4892]: I0122 09:44:07.907524 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ded4dd1-51b6-427d-8f8f-44da3828ef6b-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tqbhz\" (UID: \"7ded4dd1-51b6-427d-8f8f-44da3828ef6b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tqbhz" Jan 22 09:44:07 crc kubenswrapper[4892]: I0122 09:44:07.907662 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsm6z\" (UniqueName: \"kubernetes.io/projected/7ded4dd1-51b6-427d-8f8f-44da3828ef6b-kube-api-access-gsm6z\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tqbhz\" (UID: \"7ded4dd1-51b6-427d-8f8f-44da3828ef6b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tqbhz" Jan 22 09:44:07 crc kubenswrapper[4892]: I0122 09:44:07.907772 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/7ded4dd1-51b6-427d-8f8f-44da3828ef6b-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tqbhz\" (UID: \"7ded4dd1-51b6-427d-8f8f-44da3828ef6b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tqbhz" Jan 22 09:44:07 crc kubenswrapper[4892]: I0122 09:44:07.911425 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ded4dd1-51b6-427d-8f8f-44da3828ef6b-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tqbhz\" (UID: \"7ded4dd1-51b6-427d-8f8f-44da3828ef6b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tqbhz" Jan 22 09:44:07 crc kubenswrapper[4892]: I0122 09:44:07.912335 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ded4dd1-51b6-427d-8f8f-44da3828ef6b-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tqbhz\" (UID: \"7ded4dd1-51b6-427d-8f8f-44da3828ef6b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tqbhz" Jan 22 09:44:07 crc kubenswrapper[4892]: I0122 09:44:07.932715 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsm6z\" (UniqueName: \"kubernetes.io/projected/7ded4dd1-51b6-427d-8f8f-44da3828ef6b-kube-api-access-gsm6z\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tqbhz\" (UID: \"7ded4dd1-51b6-427d-8f8f-44da3828ef6b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tqbhz" Jan 22 09:44:08 crc kubenswrapper[4892]: I0122 09:44:08.024746 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tqbhz" Jan 22 09:44:08 crc kubenswrapper[4892]: I0122 09:44:08.552886 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tqbhz"] Jan 22 09:44:08 crc kubenswrapper[4892]: I0122 09:44:08.571786 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tqbhz" event={"ID":"7ded4dd1-51b6-427d-8f8f-44da3828ef6b","Type":"ContainerStarted","Data":"f9c7e52268ad5e3aa8cb12fbfae8fa4bba3d347b791441071e708c1ef7fd9c40"} Jan 22 09:44:10 crc kubenswrapper[4892]: I0122 09:44:10.589142 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tqbhz" event={"ID":"7ded4dd1-51b6-427d-8f8f-44da3828ef6b","Type":"ContainerStarted","Data":"865ae7697af2f0b273e49b8d3a1d48b20691ade15a874bc7c84e8f1658f3cc78"} Jan 22 09:44:10 crc kubenswrapper[4892]: I0122 09:44:10.609015 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tqbhz" podStartSLOduration=2.581588056 podStartE2EDuration="3.608996738s" podCreationTimestamp="2026-01-22 09:44:07 +0000 UTC" firstStartedPulling="2026-01-22 09:44:08.558488419 +0000 UTC m=+2018.402567482" lastFinishedPulling="2026-01-22 09:44:09.585897091 +0000 UTC m=+2019.429976164" observedRunningTime="2026-01-22 09:44:10.601626086 +0000 UTC m=+2020.445705189" watchObservedRunningTime="2026-01-22 09:44:10.608996738 +0000 UTC m=+2020.453075791" Jan 22 09:44:59 crc kubenswrapper[4892]: I0122 09:44:59.203581 4892 generic.go:334] "Generic (PLEG): container finished" podID="7ded4dd1-51b6-427d-8f8f-44da3828ef6b" 
containerID="865ae7697af2f0b273e49b8d3a1d48b20691ade15a874bc7c84e8f1658f3cc78" exitCode=0 Jan 22 09:44:59 crc kubenswrapper[4892]: I0122 09:44:59.203675 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tqbhz" event={"ID":"7ded4dd1-51b6-427d-8f8f-44da3828ef6b","Type":"ContainerDied","Data":"865ae7697af2f0b273e49b8d3a1d48b20691ade15a874bc7c84e8f1658f3cc78"} Jan 22 09:45:00 crc kubenswrapper[4892]: I0122 09:45:00.150920 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484585-fg5xc"] Jan 22 09:45:00 crc kubenswrapper[4892]: I0122 09:45:00.152663 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-fg5xc" Jan 22 09:45:00 crc kubenswrapper[4892]: I0122 09:45:00.155018 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 09:45:00 crc kubenswrapper[4892]: I0122 09:45:00.155225 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 09:45:00 crc kubenswrapper[4892]: I0122 09:45:00.162340 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484585-fg5xc"] Jan 22 09:45:00 crc kubenswrapper[4892]: I0122 09:45:00.285222 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aac6c730-e7b9-47a7-965b-0c6cac408873-secret-volume\") pod \"collect-profiles-29484585-fg5xc\" (UID: \"aac6c730-e7b9-47a7-965b-0c6cac408873\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-fg5xc" Jan 22 09:45:00 crc kubenswrapper[4892]: I0122 09:45:00.285324 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aac6c730-e7b9-47a7-965b-0c6cac408873-config-volume\") pod \"collect-profiles-29484585-fg5xc\" (UID: \"aac6c730-e7b9-47a7-965b-0c6cac408873\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-fg5xc" Jan 22 09:45:00 crc kubenswrapper[4892]: I0122 09:45:00.285444 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m96sx\" (UniqueName: \"kubernetes.io/projected/aac6c730-e7b9-47a7-965b-0c6cac408873-kube-api-access-m96sx\") pod \"collect-profiles-29484585-fg5xc\" (UID: \"aac6c730-e7b9-47a7-965b-0c6cac408873\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-fg5xc" Jan 22 09:45:00 crc kubenswrapper[4892]: I0122 09:45:00.386730 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aac6c730-e7b9-47a7-965b-0c6cac408873-config-volume\") pod \"collect-profiles-29484585-fg5xc\" (UID: \"aac6c730-e7b9-47a7-965b-0c6cac408873\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-fg5xc" Jan 22 09:45:00 crc kubenswrapper[4892]: I0122 09:45:00.386894 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m96sx\" (UniqueName: \"kubernetes.io/projected/aac6c730-e7b9-47a7-965b-0c6cac408873-kube-api-access-m96sx\") pod \"collect-profiles-29484585-fg5xc\" (UID: \"aac6c730-e7b9-47a7-965b-0c6cac408873\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-fg5xc" Jan 22 09:45:00 crc kubenswrapper[4892]: I0122 09:45:00.386988 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aac6c730-e7b9-47a7-965b-0c6cac408873-secret-volume\") pod \"collect-profiles-29484585-fg5xc\" (UID: \"aac6c730-e7b9-47a7-965b-0c6cac408873\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-fg5xc" Jan 22 09:45:00 crc kubenswrapper[4892]: I0122 09:45:00.388231 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aac6c730-e7b9-47a7-965b-0c6cac408873-config-volume\") pod \"collect-profiles-29484585-fg5xc\" (UID: \"aac6c730-e7b9-47a7-965b-0c6cac408873\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-fg5xc" Jan 22 09:45:00 crc kubenswrapper[4892]: I0122 09:45:00.395853 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aac6c730-e7b9-47a7-965b-0c6cac408873-secret-volume\") pod \"collect-profiles-29484585-fg5xc\" (UID: \"aac6c730-e7b9-47a7-965b-0c6cac408873\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-fg5xc" Jan 22 09:45:00 crc kubenswrapper[4892]: I0122 09:45:00.410704 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m96sx\" (UniqueName: \"kubernetes.io/projected/aac6c730-e7b9-47a7-965b-0c6cac408873-kube-api-access-m96sx\") pod \"collect-profiles-29484585-fg5xc\" (UID: \"aac6c730-e7b9-47a7-965b-0c6cac408873\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-fg5xc" Jan 22 09:45:00 crc kubenswrapper[4892]: I0122 09:45:00.480764 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-fg5xc" Jan 22 09:45:00 crc kubenswrapper[4892]: I0122 09:45:00.617249 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tqbhz" Jan 22 09:45:00 crc kubenswrapper[4892]: I0122 09:45:00.695743 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsm6z\" (UniqueName: \"kubernetes.io/projected/7ded4dd1-51b6-427d-8f8f-44da3828ef6b-kube-api-access-gsm6z\") pod \"7ded4dd1-51b6-427d-8f8f-44da3828ef6b\" (UID: \"7ded4dd1-51b6-427d-8f8f-44da3828ef6b\") " Jan 22 09:45:00 crc kubenswrapper[4892]: I0122 09:45:00.695889 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ded4dd1-51b6-427d-8f8f-44da3828ef6b-ssh-key-openstack-edpm-ipam\") pod \"7ded4dd1-51b6-427d-8f8f-44da3828ef6b\" (UID: \"7ded4dd1-51b6-427d-8f8f-44da3828ef6b\") " Jan 22 09:45:00 crc kubenswrapper[4892]: I0122 09:45:00.695958 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ded4dd1-51b6-427d-8f8f-44da3828ef6b-inventory\") pod \"7ded4dd1-51b6-427d-8f8f-44da3828ef6b\" (UID: \"7ded4dd1-51b6-427d-8f8f-44da3828ef6b\") " Jan 22 09:45:00 crc kubenswrapper[4892]: I0122 09:45:00.705601 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ded4dd1-51b6-427d-8f8f-44da3828ef6b-kube-api-access-gsm6z" (OuterVolumeSpecName: "kube-api-access-gsm6z") pod "7ded4dd1-51b6-427d-8f8f-44da3828ef6b" (UID: "7ded4dd1-51b6-427d-8f8f-44da3828ef6b"). InnerVolumeSpecName "kube-api-access-gsm6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:45:00 crc kubenswrapper[4892]: I0122 09:45:00.731864 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ded4dd1-51b6-427d-8f8f-44da3828ef6b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7ded4dd1-51b6-427d-8f8f-44da3828ef6b" (UID: "7ded4dd1-51b6-427d-8f8f-44da3828ef6b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:45:00 crc kubenswrapper[4892]: I0122 09:45:00.739642 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ded4dd1-51b6-427d-8f8f-44da3828ef6b-inventory" (OuterVolumeSpecName: "inventory") pod "7ded4dd1-51b6-427d-8f8f-44da3828ef6b" (UID: "7ded4dd1-51b6-427d-8f8f-44da3828ef6b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:45:00 crc kubenswrapper[4892]: I0122 09:45:00.798329 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsm6z\" (UniqueName: \"kubernetes.io/projected/7ded4dd1-51b6-427d-8f8f-44da3828ef6b-kube-api-access-gsm6z\") on node \"crc\" DevicePath \"\"" Jan 22 09:45:00 crc kubenswrapper[4892]: I0122 09:45:00.798369 4892 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ded4dd1-51b6-427d-8f8f-44da3828ef6b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:45:00 crc kubenswrapper[4892]: I0122 09:45:00.798381 4892 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ded4dd1-51b6-427d-8f8f-44da3828ef6b-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 09:45:00 crc kubenswrapper[4892]: I0122 09:45:00.982895 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484585-fg5xc"] Jan 22 09:45:01 crc kubenswrapper[4892]: I0122 09:45:01.221815 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-fg5xc" event={"ID":"aac6c730-e7b9-47a7-965b-0c6cac408873","Type":"ContainerStarted","Data":"feb0123d0fd2e5b095057567c622bfe039c58992d3cb0467c8b98c3fb345f749"} Jan 22 09:45:01 crc kubenswrapper[4892]: I0122 09:45:01.223274 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tqbhz" event={"ID":"7ded4dd1-51b6-427d-8f8f-44da3828ef6b","Type":"ContainerDied","Data":"f9c7e52268ad5e3aa8cb12fbfae8fa4bba3d347b791441071e708c1ef7fd9c40"} Jan 22 09:45:01 crc kubenswrapper[4892]: I0122 09:45:01.223329 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9c7e52268ad5e3aa8cb12fbfae8fa4bba3d347b791441071e708c1ef7fd9c40" Jan 22 09:45:01 crc kubenswrapper[4892]: I0122 09:45:01.223402 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tqbhz" Jan 22 09:45:01 crc kubenswrapper[4892]: I0122 09:45:01.309391 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-l6df8"] Jan 22 09:45:01 crc kubenswrapper[4892]: E0122 09:45:01.310206 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ded4dd1-51b6-427d-8f8f-44da3828ef6b" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:45:01 crc kubenswrapper[4892]: I0122 09:45:01.310228 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ded4dd1-51b6-427d-8f8f-44da3828ef6b" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:45:01 crc kubenswrapper[4892]: I0122 09:45:01.310553 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ded4dd1-51b6-427d-8f8f-44da3828ef6b" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:45:01 crc kubenswrapper[4892]: I0122 09:45:01.311389 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-l6df8" Jan 22 09:45:01 crc kubenswrapper[4892]: I0122 09:45:01.313931 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:45:01 crc kubenswrapper[4892]: I0122 09:45:01.314485 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lbm5h" Jan 22 09:45:01 crc kubenswrapper[4892]: I0122 09:45:01.314724 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:45:01 crc kubenswrapper[4892]: I0122 09:45:01.317540 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:45:01 crc kubenswrapper[4892]: I0122 09:45:01.328704 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-l6df8"] Jan 22 09:45:01 crc kubenswrapper[4892]: I0122 09:45:01.414125 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9486c77a-626c-488a-a958-d717027e31db-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-l6df8\" (UID: \"9486c77a-626c-488a-a958-d717027e31db\") " pod="openstack/ssh-known-hosts-edpm-deployment-l6df8" Jan 22 09:45:01 crc kubenswrapper[4892]: I0122 09:45:01.414196 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9486c77a-626c-488a-a958-d717027e31db-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-l6df8\" (UID: \"9486c77a-626c-488a-a958-d717027e31db\") " pod="openstack/ssh-known-hosts-edpm-deployment-l6df8" Jan 22 09:45:01 crc kubenswrapper[4892]: I0122 09:45:01.414234 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wv2n\" (UniqueName: \"kubernetes.io/projected/9486c77a-626c-488a-a958-d717027e31db-kube-api-access-9wv2n\") pod \"ssh-known-hosts-edpm-deployment-l6df8\" (UID: \"9486c77a-626c-488a-a958-d717027e31db\") " pod="openstack/ssh-known-hosts-edpm-deployment-l6df8" Jan 22 09:45:01 crc kubenswrapper[4892]: I0122 09:45:01.517250 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9486c77a-626c-488a-a958-d717027e31db-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-l6df8\" (UID: \"9486c77a-626c-488a-a958-d717027e31db\") " pod="openstack/ssh-known-hosts-edpm-deployment-l6df8" Jan 22 09:45:01 crc kubenswrapper[4892]: I0122 09:45:01.517410 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9486c77a-626c-488a-a958-d717027e31db-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-l6df8\" (UID: \"9486c77a-626c-488a-a958-d717027e31db\") " pod="openstack/ssh-known-hosts-edpm-deployment-l6df8" Jan 22 09:45:01 crc kubenswrapper[4892]: I0122 09:45:01.518708 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wv2n\" (UniqueName: \"kubernetes.io/projected/9486c77a-626c-488a-a958-d717027e31db-kube-api-access-9wv2n\") pod \"ssh-known-hosts-edpm-deployment-l6df8\" (UID: \"9486c77a-626c-488a-a958-d717027e31db\") " pod="openstack/ssh-known-hosts-edpm-deployment-l6df8" Jan 22 09:45:01 crc 
kubenswrapper[4892]: I0122 09:45:01.525493 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9486c77a-626c-488a-a958-d717027e31db-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-l6df8\" (UID: \"9486c77a-626c-488a-a958-d717027e31db\") " pod="openstack/ssh-known-hosts-edpm-deployment-l6df8" Jan 22 09:45:01 crc kubenswrapper[4892]: I0122 09:45:01.535334 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9486c77a-626c-488a-a958-d717027e31db-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-l6df8\" (UID: \"9486c77a-626c-488a-a958-d717027e31db\") " pod="openstack/ssh-known-hosts-edpm-deployment-l6df8" Jan 22 09:45:01 crc kubenswrapper[4892]: I0122 09:45:01.542049 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wv2n\" (UniqueName: \"kubernetes.io/projected/9486c77a-626c-488a-a958-d717027e31db-kube-api-access-9wv2n\") pod \"ssh-known-hosts-edpm-deployment-l6df8\" (UID: \"9486c77a-626c-488a-a958-d717027e31db\") " pod="openstack/ssh-known-hosts-edpm-deployment-l6df8" Jan 22 09:45:01 crc kubenswrapper[4892]: I0122 09:45:01.644109 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-l6df8" Jan 22 09:45:02 crc kubenswrapper[4892]: I0122 09:45:02.185442 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-l6df8"] Jan 22 09:45:02 crc kubenswrapper[4892]: I0122 09:45:02.235168 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-l6df8" event={"ID":"9486c77a-626c-488a-a958-d717027e31db","Type":"ContainerStarted","Data":"aac2e374a4c88c78912ac4320d43b3518e37982db6b8f38922ecd4c2c3a296ac"} Jan 22 09:45:02 crc kubenswrapper[4892]: I0122 09:45:02.237187 4892 generic.go:334] "Generic (PLEG): container finished" podID="aac6c730-e7b9-47a7-965b-0c6cac408873" containerID="b40dd2f4c05051f4a785c8b4f7602f97173de5ddcf0ddd8bd90692934c5061ab" exitCode=0 Jan 22 09:45:02 crc kubenswrapper[4892]: I0122 09:45:02.237242 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-fg5xc" event={"ID":"aac6c730-e7b9-47a7-965b-0c6cac408873","Type":"ContainerDied","Data":"b40dd2f4c05051f4a785c8b4f7602f97173de5ddcf0ddd8bd90692934c5061ab"} Jan 22 09:45:03 crc kubenswrapper[4892]: I0122 09:45:03.581364 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-fg5xc" Jan 22 09:45:03 crc kubenswrapper[4892]: I0122 09:45:03.666197 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aac6c730-e7b9-47a7-965b-0c6cac408873-config-volume\") pod \"aac6c730-e7b9-47a7-965b-0c6cac408873\" (UID: \"aac6c730-e7b9-47a7-965b-0c6cac408873\") " Jan 22 09:45:03 crc kubenswrapper[4892]: I0122 09:45:03.666376 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m96sx\" (UniqueName: \"kubernetes.io/projected/aac6c730-e7b9-47a7-965b-0c6cac408873-kube-api-access-m96sx\") pod \"aac6c730-e7b9-47a7-965b-0c6cac408873\" (UID: \"aac6c730-e7b9-47a7-965b-0c6cac408873\") " Jan 22 09:45:03 crc kubenswrapper[4892]: I0122 09:45:03.666422 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aac6c730-e7b9-47a7-965b-0c6cac408873-secret-volume\") pod \"aac6c730-e7b9-47a7-965b-0c6cac408873\" (UID: \"aac6c730-e7b9-47a7-965b-0c6cac408873\") " Jan 22 09:45:03 crc kubenswrapper[4892]: I0122 09:45:03.666839 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aac6c730-e7b9-47a7-965b-0c6cac408873-config-volume" (OuterVolumeSpecName: "config-volume") pod "aac6c730-e7b9-47a7-965b-0c6cac408873" (UID: "aac6c730-e7b9-47a7-965b-0c6cac408873"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:45:03 crc kubenswrapper[4892]: I0122 09:45:03.667022 4892 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aac6c730-e7b9-47a7-965b-0c6cac408873-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 09:45:03 crc kubenswrapper[4892]: I0122 09:45:03.671962 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aac6c730-e7b9-47a7-965b-0c6cac408873-kube-api-access-m96sx" (OuterVolumeSpecName: "kube-api-access-m96sx") pod "aac6c730-e7b9-47a7-965b-0c6cac408873" (UID: "aac6c730-e7b9-47a7-965b-0c6cac408873"). InnerVolumeSpecName "kube-api-access-m96sx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:45:03 crc kubenswrapper[4892]: I0122 09:45:03.694547 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aac6c730-e7b9-47a7-965b-0c6cac408873-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "aac6c730-e7b9-47a7-965b-0c6cac408873" (UID: "aac6c730-e7b9-47a7-965b-0c6cac408873"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:45:03 crc kubenswrapper[4892]: I0122 09:45:03.769232 4892 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aac6c730-e7b9-47a7-965b-0c6cac408873-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 09:45:03 crc kubenswrapper[4892]: I0122 09:45:03.769312 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m96sx\" (UniqueName: \"kubernetes.io/projected/aac6c730-e7b9-47a7-965b-0c6cac408873-kube-api-access-m96sx\") on node \"crc\" DevicePath \"\"" Jan 22 09:45:04 crc kubenswrapper[4892]: I0122 09:45:04.254240 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-fg5xc" event={"ID":"aac6c730-e7b9-47a7-965b-0c6cac408873","Type":"ContainerDied","Data":"feb0123d0fd2e5b095057567c622bfe039c58992d3cb0467c8b98c3fb345f749"} Jan 22 09:45:04 crc kubenswrapper[4892]: I0122 09:45:04.254523 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="feb0123d0fd2e5b095057567c622bfe039c58992d3cb0467c8b98c3fb345f749" Jan 22 09:45:04 crc kubenswrapper[4892]: I0122 09:45:04.254339 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-fg5xc" Jan 22 09:45:04 crc kubenswrapper[4892]: I0122 09:45:04.664332 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484540-bgd4w"] Jan 22 09:45:04 crc kubenswrapper[4892]: I0122 09:45:04.674971 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484540-bgd4w"] Jan 22 09:45:05 crc kubenswrapper[4892]: I0122 09:45:05.264233 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-l6df8" event={"ID":"9486c77a-626c-488a-a958-d717027e31db","Type":"ContainerStarted","Data":"05c3868ab543d4f680148ba6904918978dff10e40fc10b9981bb40a2f5e1ae75"} Jan 22 09:45:05 crc kubenswrapper[4892]: I0122 09:45:05.284706 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-l6df8" podStartSLOduration=1.523525088 podStartE2EDuration="4.284688956s" podCreationTimestamp="2026-01-22 09:45:01 +0000 UTC" firstStartedPulling="2026-01-22 09:45:02.191379685 +0000 UTC m=+2072.035458748" lastFinishedPulling="2026-01-22 09:45:04.952543553 +0000 UTC m=+2074.796622616" observedRunningTime="2026-01-22 09:45:05.280308517 +0000 UTC m=+2075.124387600" watchObservedRunningTime="2026-01-22 09:45:05.284688956 +0000 UTC m=+2075.128768019" Jan 22 09:45:05 crc kubenswrapper[4892]: I0122 09:45:05.430819 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="665bbb60-4e79-4d6d-b805-1e03ef3442be" path="/var/lib/kubelet/pods/665bbb60-4e79-4d6d-b805-1e03ef3442be/volumes" Jan 22 09:45:12 crc kubenswrapper[4892]: I0122 09:45:12.331203 4892 generic.go:334] "Generic (PLEG): container finished" podID="9486c77a-626c-488a-a958-d717027e31db" containerID="05c3868ab543d4f680148ba6904918978dff10e40fc10b9981bb40a2f5e1ae75" exitCode=0 Jan 22 09:45:12 crc kubenswrapper[4892]: I0122 09:45:12.331325 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-l6df8" event={"ID":"9486c77a-626c-488a-a958-d717027e31db","Type":"ContainerDied","Data":"05c3868ab543d4f680148ba6904918978dff10e40fc10b9981bb40a2f5e1ae75"} 
Jan 22 09:45:13 crc kubenswrapper[4892]: I0122 09:45:13.831628 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-l6df8" Jan 22 09:45:13 crc kubenswrapper[4892]: I0122 09:45:13.968122 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9486c77a-626c-488a-a958-d717027e31db-inventory-0\") pod \"9486c77a-626c-488a-a958-d717027e31db\" (UID: \"9486c77a-626c-488a-a958-d717027e31db\") " Jan 22 09:45:13 crc kubenswrapper[4892]: I0122 09:45:13.968254 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9486c77a-626c-488a-a958-d717027e31db-ssh-key-openstack-edpm-ipam\") pod \"9486c77a-626c-488a-a958-d717027e31db\" (UID: \"9486c77a-626c-488a-a958-d717027e31db\") " Jan 22 09:45:13 crc kubenswrapper[4892]: I0122 09:45:13.968348 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wv2n\" (UniqueName: \"kubernetes.io/projected/9486c77a-626c-488a-a958-d717027e31db-kube-api-access-9wv2n\") pod \"9486c77a-626c-488a-a958-d717027e31db\" (UID: \"9486c77a-626c-488a-a958-d717027e31db\") " Jan 22 09:45:13 crc kubenswrapper[4892]: I0122 09:45:13.978339 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9486c77a-626c-488a-a958-d717027e31db-kube-api-access-9wv2n" (OuterVolumeSpecName: "kube-api-access-9wv2n") pod "9486c77a-626c-488a-a958-d717027e31db" (UID: "9486c77a-626c-488a-a958-d717027e31db"). InnerVolumeSpecName "kube-api-access-9wv2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:45:13 crc kubenswrapper[4892]: I0122 09:45:13.998957 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9486c77a-626c-488a-a958-d717027e31db-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9486c77a-626c-488a-a958-d717027e31db" (UID: "9486c77a-626c-488a-a958-d717027e31db"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:45:13 crc kubenswrapper[4892]: I0122 09:45:13.999208 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9486c77a-626c-488a-a958-d717027e31db-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "9486c77a-626c-488a-a958-d717027e31db" (UID: "9486c77a-626c-488a-a958-d717027e31db"). InnerVolumeSpecName "inventory-0". 
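
Two recurring patterns here look alarming but appear to be benign for these run-to-completion pods: the "No ready sandbox for pod can be found. Need to start a new one" lines are emitted when the kubelet re-syncs a pod whose only container has already exited (no new sandbox is actually created for a completed job), and the E-level "RemoveStaleState: removing container" lines from cpu_manager and memory_manager are routine cleanup of per-container resource-manager state left over from the just-finished pod, logged at error severity even though nothing failed.
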
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:45:14 crc kubenswrapper[4892]: I0122 09:45:14.070448 4892 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9486c77a-626c-488a-a958-d717027e31db-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:45:14 crc kubenswrapper[4892]: I0122 09:45:14.070492 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wv2n\" (UniqueName: \"kubernetes.io/projected/9486c77a-626c-488a-a958-d717027e31db-kube-api-access-9wv2n\") on node \"crc\" DevicePath \"\"" Jan 22 09:45:14 crc kubenswrapper[4892]: I0122 09:45:14.070507 4892 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9486c77a-626c-488a-a958-d717027e31db-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 22 09:45:14 crc kubenswrapper[4892]: I0122 09:45:14.351269 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-l6df8" event={"ID":"9486c77a-626c-488a-a958-d717027e31db","Type":"ContainerDied","Data":"aac2e374a4c88c78912ac4320d43b3518e37982db6b8f38922ecd4c2c3a296ac"} Jan 22 09:45:14 crc kubenswrapper[4892]: I0122 09:45:14.351334 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aac2e374a4c88c78912ac4320d43b3518e37982db6b8f38922ecd4c2c3a296ac" Jan 22 09:45:14 crc kubenswrapper[4892]: I0122 09:45:14.351411 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-l6df8" Jan 22 09:45:14 crc kubenswrapper[4892]: I0122 09:45:14.446209 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8d76w"] Jan 22 09:45:14 crc kubenswrapper[4892]: E0122 09:45:14.447031 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9486c77a-626c-488a-a958-d717027e31db" containerName="ssh-known-hosts-edpm-deployment" Jan 22 09:45:14 crc kubenswrapper[4892]: I0122 09:45:14.447053 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="9486c77a-626c-488a-a958-d717027e31db" containerName="ssh-known-hosts-edpm-deployment" Jan 22 09:45:14 crc kubenswrapper[4892]: E0122 09:45:14.447083 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aac6c730-e7b9-47a7-965b-0c6cac408873" containerName="collect-profiles" Jan 22 09:45:14 crc kubenswrapper[4892]: I0122 09:45:14.447093 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="aac6c730-e7b9-47a7-965b-0c6cac408873" containerName="collect-profiles" Jan 22 09:45:14 crc kubenswrapper[4892]: I0122 09:45:14.447304 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="9486c77a-626c-488a-a958-d717027e31db" containerName="ssh-known-hosts-edpm-deployment" Jan 22 09:45:14 crc kubenswrapper[4892]: I0122 09:45:14.447327 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="aac6c730-e7b9-47a7-965b-0c6cac408873" containerName="collect-profiles" Jan 22 09:45:14 crc kubenswrapper[4892]: I0122 09:45:14.447983 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8d76w" Jan 22 09:45:14 crc kubenswrapper[4892]: I0122 09:45:14.450117 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:45:14 crc kubenswrapper[4892]: I0122 09:45:14.450398 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:45:14 crc kubenswrapper[4892]: I0122 09:45:14.450587 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lbm5h" Jan 22 09:45:14 crc kubenswrapper[4892]: I0122 09:45:14.450788 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:45:14 crc kubenswrapper[4892]: I0122 09:45:14.456101 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8d76w"] Jan 22 09:45:14 crc kubenswrapper[4892]: I0122 09:45:14.584207 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fac9a973-588e-43e5-b6d1-530127ccccad-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8d76w\" (UID: \"fac9a973-588e-43e5-b6d1-530127ccccad\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8d76w" Jan 22 09:45:14 crc kubenswrapper[4892]: I0122 09:45:14.584590 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fac9a973-588e-43e5-b6d1-530127ccccad-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8d76w\" (UID: \"fac9a973-588e-43e5-b6d1-530127ccccad\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8d76w" Jan 22 09:45:14 crc kubenswrapper[4892]: I0122 09:45:14.584622 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zb97\" (UniqueName: \"kubernetes.io/projected/fac9a973-588e-43e5-b6d1-530127ccccad-kube-api-access-8zb97\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8d76w\" (UID: \"fac9a973-588e-43e5-b6d1-530127ccccad\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8d76w" Jan 22 09:45:14 crc kubenswrapper[4892]: I0122 09:45:14.687121 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fac9a973-588e-43e5-b6d1-530127ccccad-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8d76w\" (UID: \"fac9a973-588e-43e5-b6d1-530127ccccad\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8d76w" Jan 22 09:45:14 crc kubenswrapper[4892]: I0122 09:45:14.687320 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fac9a973-588e-43e5-b6d1-530127ccccad-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8d76w\" (UID: \"fac9a973-588e-43e5-b6d1-530127ccccad\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8d76w" Jan 22 09:45:14 crc kubenswrapper[4892]: I0122 09:45:14.687394 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zb97\" (UniqueName: \"kubernetes.io/projected/fac9a973-588e-43e5-b6d1-530127ccccad-kube-api-access-8zb97\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-8d76w\" (UID: \"fac9a973-588e-43e5-b6d1-530127ccccad\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8d76w" Jan 22 09:45:14 crc kubenswrapper[4892]: I0122 09:45:14.692563 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fac9a973-588e-43e5-b6d1-530127ccccad-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8d76w\" (UID: \"fac9a973-588e-43e5-b6d1-530127ccccad\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8d76w" Jan 22 09:45:14 crc kubenswrapper[4892]: I0122 09:45:14.693917 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fac9a973-588e-43e5-b6d1-530127ccccad-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8d76w\" (UID: \"fac9a973-588e-43e5-b6d1-530127ccccad\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8d76w" Jan 22 09:45:14 crc kubenswrapper[4892]: I0122 09:45:14.705737 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zb97\" (UniqueName: \"kubernetes.io/projected/fac9a973-588e-43e5-b6d1-530127ccccad-kube-api-access-8zb97\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8d76w\" (UID: \"fac9a973-588e-43e5-b6d1-530127ccccad\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8d76w" Jan 22 09:45:14 crc kubenswrapper[4892]: I0122 09:45:14.772177 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8d76w" Jan 22 09:45:15 crc kubenswrapper[4892]: I0122 09:45:15.283900 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8d76w"] Jan 22 09:45:15 crc kubenswrapper[4892]: I0122 09:45:15.359163 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8d76w" event={"ID":"fac9a973-588e-43e5-b6d1-530127ccccad","Type":"ContainerStarted","Data":"2a417449846a7140891d27dc0489f455d804d33b8cfa941131a33b08f7c13b7e"} Jan 22 09:45:16 crc kubenswrapper[4892]: I0122 09:45:16.323469 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:45:16 crc kubenswrapper[4892]: I0122 09:45:16.323973 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:45:16 crc kubenswrapper[4892]: I0122 09:45:16.369113 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8d76w" event={"ID":"fac9a973-588e-43e5-b6d1-530127ccccad","Type":"ContainerStarted","Data":"2eb59106e19524e7da9373c56aa7a06ce902632233fcf30f82e6417457f22ae1"} Jan 22 09:45:16 crc kubenswrapper[4892]: I0122 09:45:16.386669 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8d76w" podStartSLOduration=1.934362547 podStartE2EDuration="2.386647602s" 
podCreationTimestamp="2026-01-22 09:45:14 +0000 UTC" firstStartedPulling="2026-01-22 09:45:15.284061666 +0000 UTC m=+2085.128140719" lastFinishedPulling="2026-01-22 09:45:15.736346721 +0000 UTC m=+2085.580425774" observedRunningTime="2026-01-22 09:45:16.382457509 +0000 UTC m=+2086.226536582" watchObservedRunningTime="2026-01-22 09:45:16.386647602 +0000 UTC m=+2086.230726675" Jan 22 09:45:24 crc kubenswrapper[4892]: I0122 09:45:24.440684 4892 generic.go:334] "Generic (PLEG): container finished" podID="fac9a973-588e-43e5-b6d1-530127ccccad" containerID="2eb59106e19524e7da9373c56aa7a06ce902632233fcf30f82e6417457f22ae1" exitCode=0 Jan 22 09:45:24 crc kubenswrapper[4892]: I0122 09:45:24.440868 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8d76w" event={"ID":"fac9a973-588e-43e5-b6d1-530127ccccad","Type":"ContainerDied","Data":"2eb59106e19524e7da9373c56aa7a06ce902632233fcf30f82e6417457f22ae1"} Jan 22 09:45:25 crc kubenswrapper[4892]: I0122 09:45:25.889865 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8d76w" Jan 22 09:45:26 crc kubenswrapper[4892]: I0122 09:45:26.049834 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fac9a973-588e-43e5-b6d1-530127ccccad-ssh-key-openstack-edpm-ipam\") pod \"fac9a973-588e-43e5-b6d1-530127ccccad\" (UID: \"fac9a973-588e-43e5-b6d1-530127ccccad\") " Jan 22 09:45:26 crc kubenswrapper[4892]: I0122 09:45:26.049949 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fac9a973-588e-43e5-b6d1-530127ccccad-inventory\") pod \"fac9a973-588e-43e5-b6d1-530127ccccad\" (UID: \"fac9a973-588e-43e5-b6d1-530127ccccad\") " Jan 22 09:45:26 crc kubenswrapper[4892]: I0122 09:45:26.050097 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zb97\" (UniqueName: \"kubernetes.io/projected/fac9a973-588e-43e5-b6d1-530127ccccad-kube-api-access-8zb97\") pod \"fac9a973-588e-43e5-b6d1-530127ccccad\" (UID: \"fac9a973-588e-43e5-b6d1-530127ccccad\") " Jan 22 09:45:26 crc kubenswrapper[4892]: I0122 09:45:26.055834 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fac9a973-588e-43e5-b6d1-530127ccccad-kube-api-access-8zb97" (OuterVolumeSpecName: "kube-api-access-8zb97") pod "fac9a973-588e-43e5-b6d1-530127ccccad" (UID: "fac9a973-588e-43e5-b6d1-530127ccccad"). InnerVolumeSpecName "kube-api-access-8zb97". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:45:26 crc kubenswrapper[4892]: I0122 09:45:26.087544 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fac9a973-588e-43e5-b6d1-530127ccccad-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fac9a973-588e-43e5-b6d1-530127ccccad" (UID: "fac9a973-588e-43e5-b6d1-530127ccccad"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:45:26 crc kubenswrapper[4892]: I0122 09:45:26.096727 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fac9a973-588e-43e5-b6d1-530127ccccad-inventory" (OuterVolumeSpecName: "inventory") pod "fac9a973-588e-43e5-b6d1-530127ccccad" (UID: "fac9a973-588e-43e5-b6d1-530127ccccad"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:45:26 crc kubenswrapper[4892]: I0122 09:45:26.152237 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zb97\" (UniqueName: \"kubernetes.io/projected/fac9a973-588e-43e5-b6d1-530127ccccad-kube-api-access-8zb97\") on node \"crc\" DevicePath \"\"" Jan 22 09:45:26 crc kubenswrapper[4892]: I0122 09:45:26.152497 4892 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fac9a973-588e-43e5-b6d1-530127ccccad-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:45:26 crc kubenswrapper[4892]: I0122 09:45:26.152514 4892 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fac9a973-588e-43e5-b6d1-530127ccccad-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 09:45:26 crc kubenswrapper[4892]: I0122 09:45:26.469566 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8d76w" event={"ID":"fac9a973-588e-43e5-b6d1-530127ccccad","Type":"ContainerDied","Data":"2a417449846a7140891d27dc0489f455d804d33b8cfa941131a33b08f7c13b7e"} Jan 22 09:45:26 crc kubenswrapper[4892]: I0122 09:45:26.469661 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a417449846a7140891d27dc0489f455d804d33b8cfa941131a33b08f7c13b7e" Jan 22 09:45:26 crc kubenswrapper[4892]: I0122 09:45:26.469661 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8d76w" Jan 22 09:45:26 crc kubenswrapper[4892]: I0122 09:45:26.549162 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nwf54"] Jan 22 09:45:26 crc kubenswrapper[4892]: E0122 09:45:26.549743 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fac9a973-588e-43e5-b6d1-530127ccccad" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:45:26 crc kubenswrapper[4892]: I0122 09:45:26.549770 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="fac9a973-588e-43e5-b6d1-530127ccccad" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:45:26 crc kubenswrapper[4892]: I0122 09:45:26.549975 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="fac9a973-588e-43e5-b6d1-530127ccccad" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:45:26 crc kubenswrapper[4892]: I0122 09:45:26.553163 4892 util.go:30] "No sandbox for pod can be found. 
Jan 22 09:45:26 crc kubenswrapper[4892]: I0122 09:45:26.555544 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lbm5h"
Jan 22 09:45:26 crc kubenswrapper[4892]: I0122 09:45:26.556031 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 22 09:45:26 crc kubenswrapper[4892]: I0122 09:45:26.556147 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 22 09:45:26 crc kubenswrapper[4892]: I0122 09:45:26.556264 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 22 09:45:26 crc kubenswrapper[4892]: I0122 09:45:26.583599 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nwf54"]
Jan 22 09:45:26 crc kubenswrapper[4892]: I0122 09:45:26.676367 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cc609164-f9fe-4caf-ae10-ed043d1091fe-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nwf54\" (UID: \"cc609164-f9fe-4caf-ae10-ed043d1091fe\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nwf54"
Jan 22 09:45:26 crc kubenswrapper[4892]: I0122 09:45:26.677117 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpw4h\" (UniqueName: \"kubernetes.io/projected/cc609164-f9fe-4caf-ae10-ed043d1091fe-kube-api-access-cpw4h\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nwf54\" (UID: \"cc609164-f9fe-4caf-ae10-ed043d1091fe\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nwf54"
Jan 22 09:45:26 crc kubenswrapper[4892]: I0122 09:45:26.677417 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc609164-f9fe-4caf-ae10-ed043d1091fe-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nwf54\" (UID: \"cc609164-f9fe-4caf-ae10-ed043d1091fe\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nwf54"
Jan 22 09:45:26 crc kubenswrapper[4892]: I0122 09:45:26.779008 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc609164-f9fe-4caf-ae10-ed043d1091fe-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nwf54\" (UID: \"cc609164-f9fe-4caf-ae10-ed043d1091fe\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nwf54"
Jan 22 09:45:26 crc kubenswrapper[4892]: I0122 09:45:26.779068 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cc609164-f9fe-4caf-ae10-ed043d1091fe-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nwf54\" (UID: \"cc609164-f9fe-4caf-ae10-ed043d1091fe\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nwf54"
Jan 22 09:45:26 crc kubenswrapper[4892]: I0122 09:45:26.779225 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpw4h\" (UniqueName: \"kubernetes.io/projected/cc609164-f9fe-4caf-ae10-ed043d1091fe-kube-api-access-cpw4h\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nwf54\" (UID: \"cc609164-f9fe-4caf-ae10-ed043d1091fe\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nwf54"
Jan 22 09:45:26 crc kubenswrapper[4892]: I0122 09:45:26.784845 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cc609164-f9fe-4caf-ae10-ed043d1091fe-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nwf54\" (UID: \"cc609164-f9fe-4caf-ae10-ed043d1091fe\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nwf54"
Jan 22 09:45:26 crc kubenswrapper[4892]: I0122 09:45:26.784983 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc609164-f9fe-4caf-ae10-ed043d1091fe-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nwf54\" (UID: \"cc609164-f9fe-4caf-ae10-ed043d1091fe\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nwf54"
Jan 22 09:45:26 crc kubenswrapper[4892]: I0122 09:45:26.796065 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpw4h\" (UniqueName: \"kubernetes.io/projected/cc609164-f9fe-4caf-ae10-ed043d1091fe-kube-api-access-cpw4h\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nwf54\" (UID: \"cc609164-f9fe-4caf-ae10-ed043d1091fe\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nwf54"
Jan 22 09:45:26 crc kubenswrapper[4892]: I0122 09:45:26.885145 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nwf54"
Jan 22 09:45:27 crc kubenswrapper[4892]: I0122 09:45:27.370503 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nwf54"]
Jan 22 09:45:27 crc kubenswrapper[4892]: I0122 09:45:27.478140 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nwf54" event={"ID":"cc609164-f9fe-4caf-ae10-ed043d1091fe","Type":"ContainerStarted","Data":"0d183b3121166c5cc876b810b1d39158ed2cc71c3da023b1cf2620c7feeae2d6"}
Jan 22 09:45:31 crc kubenswrapper[4892]: I0122 09:45:31.516977 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nwf54" event={"ID":"cc609164-f9fe-4caf-ae10-ed043d1091fe","Type":"ContainerStarted","Data":"cfa0f59fe3550d551773bfe3a80a37b84d1dffd5c8544488bafa88022411e4d6"}
Jan 22 09:45:31 crc kubenswrapper[4892]: I0122 09:45:31.536697 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nwf54" podStartSLOduration=2.98702002 podStartE2EDuration="5.536676021s" podCreationTimestamp="2026-01-22 09:45:26 +0000 UTC" firstStartedPulling="2026-01-22 09:45:27.378519513 +0000 UTC m=+2097.222598576" lastFinishedPulling="2026-01-22 09:45:29.928175514 +0000 UTC m=+2099.772254577" observedRunningTime="2026-01-22 09:45:31.533675627 +0000 UTC m=+2101.377754690" watchObservedRunningTime="2026-01-22 09:45:31.536676021 +0000 UTC m=+2101.380755084"
Jan 22 09:45:38 crc kubenswrapper[4892]: I0122 09:45:38.646930 4892 scope.go:117] "RemoveContainer" containerID="e6910428801437b8c1abb965a14945a3975ce56eb8446d136960b8603089a37d"
Jan 22 09:45:40 crc kubenswrapper[4892]: I0122 09:45:40.606611 4892 generic.go:334] "Generic (PLEG): container finished" podID="cc609164-f9fe-4caf-ae10-ed043d1091fe" containerID="cfa0f59fe3550d551773bfe3a80a37b84d1dffd5c8544488bafa88022411e4d6" exitCode=0
containerID="cfa0f59fe3550d551773bfe3a80a37b84d1dffd5c8544488bafa88022411e4d6" exitCode=0 Jan 22 09:45:40 crc kubenswrapper[4892]: I0122 09:45:40.606695 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nwf54" event={"ID":"cc609164-f9fe-4caf-ae10-ed043d1091fe","Type":"ContainerDied","Data":"cfa0f59fe3550d551773bfe3a80a37b84d1dffd5c8544488bafa88022411e4d6"} Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.106409 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nwf54" Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.196145 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc609164-f9fe-4caf-ae10-ed043d1091fe-inventory\") pod \"cc609164-f9fe-4caf-ae10-ed043d1091fe\" (UID: \"cc609164-f9fe-4caf-ae10-ed043d1091fe\") " Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.196488 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpw4h\" (UniqueName: \"kubernetes.io/projected/cc609164-f9fe-4caf-ae10-ed043d1091fe-kube-api-access-cpw4h\") pod \"cc609164-f9fe-4caf-ae10-ed043d1091fe\" (UID: \"cc609164-f9fe-4caf-ae10-ed043d1091fe\") " Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.196635 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cc609164-f9fe-4caf-ae10-ed043d1091fe-ssh-key-openstack-edpm-ipam\") pod \"cc609164-f9fe-4caf-ae10-ed043d1091fe\" (UID: \"cc609164-f9fe-4caf-ae10-ed043d1091fe\") " Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.206401 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc609164-f9fe-4caf-ae10-ed043d1091fe-kube-api-access-cpw4h" (OuterVolumeSpecName: "kube-api-access-cpw4h") pod "cc609164-f9fe-4caf-ae10-ed043d1091fe" (UID: "cc609164-f9fe-4caf-ae10-ed043d1091fe"). InnerVolumeSpecName "kube-api-access-cpw4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.233164 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc609164-f9fe-4caf-ae10-ed043d1091fe-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cc609164-f9fe-4caf-ae10-ed043d1091fe" (UID: "cc609164-f9fe-4caf-ae10-ed043d1091fe"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.234525 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc609164-f9fe-4caf-ae10-ed043d1091fe-inventory" (OuterVolumeSpecName: "inventory") pod "cc609164-f9fe-4caf-ae10-ed043d1091fe" (UID: "cc609164-f9fe-4caf-ae10-ed043d1091fe"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.299093 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpw4h\" (UniqueName: \"kubernetes.io/projected/cc609164-f9fe-4caf-ae10-ed043d1091fe-kube-api-access-cpw4h\") on node \"crc\" DevicePath \"\"" Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.299361 4892 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cc609164-f9fe-4caf-ae10-ed043d1091fe-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.299373 4892 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc609164-f9fe-4caf-ae10-ed043d1091fe-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.625382 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nwf54" event={"ID":"cc609164-f9fe-4caf-ae10-ed043d1091fe","Type":"ContainerDied","Data":"0d183b3121166c5cc876b810b1d39158ed2cc71c3da023b1cf2620c7feeae2d6"} Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.625427 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d183b3121166c5cc876b810b1d39158ed2cc71c3da023b1cf2620c7feeae2d6" Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.625435 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nwf54" Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.753872 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"] Jan 22 09:45:42 crc kubenswrapper[4892]: E0122 09:45:42.754361 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc609164-f9fe-4caf-ae10-ed043d1091fe" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.754387 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc609164-f9fe-4caf-ae10-ed043d1091fe" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.754567 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc609164-f9fe-4caf-ae10-ed043d1091fe" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.755180 4892 util.go:30] "No sandbox for pod can be found. 
Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.758020 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.758184 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0"
Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.758790 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0"
Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.759735 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0"
Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.760265 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.760442 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lbm5h"
Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.760626 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0"
Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.761262 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.765251 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"]
Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.910513 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.910558 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1f22ae69-a8dd-4646-836d-d48376094ceb-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.910602 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1f22ae69-a8dd-4646-836d-d48376094ceb-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.910621 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brvvc\" (UniqueName: \"kubernetes.io/projected/1f22ae69-a8dd-4646-836d-d48376094ceb-kube-api-access-brvvc\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.910645 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.910665 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.910686 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.910706 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1f22ae69-a8dd-4646-836d-d48376094ceb-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.910826 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.910968 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.911004 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.911035 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.911070 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:42 crc kubenswrapper[4892]: I0122 09:45:42.911191 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1f22ae69-a8dd-4646-836d-d48376094ceb-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:43 crc kubenswrapper[4892]: I0122 09:45:43.013002 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1f22ae69-a8dd-4646-836d-d48376094ceb-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:43 crc kubenswrapper[4892]: I0122 09:45:43.013063 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brvvc\" (UniqueName: \"kubernetes.io/projected/1f22ae69-a8dd-4646-836d-d48376094ceb-kube-api-access-brvvc\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:43 crc kubenswrapper[4892]: I0122 09:45:43.013093 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:43 crc kubenswrapper[4892]: I0122 09:45:43.013120 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:43 crc kubenswrapper[4892]: I0122 09:45:43.013149 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:43 crc kubenswrapper[4892]: I0122 09:45:43.013182 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1f22ae69-a8dd-4646-836d-d48376094ceb-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:43 crc kubenswrapper[4892]: I0122 09:45:43.013211 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:43 crc kubenswrapper[4892]: I0122 09:45:43.013256 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:43 crc kubenswrapper[4892]: I0122 09:45:43.013299 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:43 crc kubenswrapper[4892]: I0122 09:45:43.013332 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:43 crc kubenswrapper[4892]: I0122 09:45:43.013367 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:43 crc kubenswrapper[4892]: I0122 09:45:43.013398 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1f22ae69-a8dd-4646-836d-d48376094ceb-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:43 crc kubenswrapper[4892]: I0122 09:45:43.013440 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:43 crc kubenswrapper[4892]: I0122 09:45:43.013461 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1f22ae69-a8dd-4646-836d-d48376094ceb-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:43 crc kubenswrapper[4892]: I0122 09:45:43.017718 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1f22ae69-a8dd-4646-836d-d48376094ceb-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:43 crc kubenswrapper[4892]: I0122 09:45:43.018027 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1f22ae69-a8dd-4646-836d-d48376094ceb-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:43 crc kubenswrapper[4892]: I0122 09:45:43.018988 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1f22ae69-a8dd-4646-836d-d48376094ceb-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:43 crc kubenswrapper[4892]: I0122 09:45:43.019171 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1f22ae69-a8dd-4646-836d-d48376094ceb-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:43 crc kubenswrapper[4892]: I0122 09:45:43.019423 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:43 crc kubenswrapper[4892]: I0122 09:45:43.019668 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:43 crc kubenswrapper[4892]: I0122 09:45:43.019902 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:43 crc kubenswrapper[4892]: I0122 09:45:43.019958 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:43 crc kubenswrapper[4892]: I0122 09:45:43.020879 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:43 crc kubenswrapper[4892]: I0122 09:45:43.021115 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:43 crc kubenswrapper[4892]: I0122 09:45:43.021738 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:43 crc kubenswrapper[4892]: I0122 09:45:43.023936 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
Jan 22 09:45:43 crc kubenswrapper[4892]: I0122 09:45:43.032940 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"
\"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4" Jan 22 09:45:43 crc kubenswrapper[4892]: I0122 09:45:43.034621 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brvvc\" (UniqueName: \"kubernetes.io/projected/1f22ae69-a8dd-4646-836d-d48376094ceb-kube-api-access-brvvc\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-276h4\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4" Jan 22 09:45:43 crc kubenswrapper[4892]: I0122 09:45:43.082582 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4" Jan 22 09:45:43 crc kubenswrapper[4892]: I0122 09:45:43.616776 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4"] Jan 22 09:45:43 crc kubenswrapper[4892]: W0122 09:45:43.619349 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f22ae69_a8dd_4646_836d_d48376094ceb.slice/crio-aea946fa8822d5781b308b4c86a7ea34446b850f4d0ecab6c6c9e7c70a59f495 WatchSource:0}: Error finding container aea946fa8822d5781b308b4c86a7ea34446b850f4d0ecab6c6c9e7c70a59f495: Status 404 returned error can't find the container with id aea946fa8822d5781b308b4c86a7ea34446b850f4d0ecab6c6c9e7c70a59f495 Jan 22 09:45:43 crc kubenswrapper[4892]: I0122 09:45:43.639607 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4" event={"ID":"1f22ae69-a8dd-4646-836d-d48376094ceb","Type":"ContainerStarted","Data":"aea946fa8822d5781b308b4c86a7ea34446b850f4d0ecab6c6c9e7c70a59f495"} Jan 22 09:45:46 crc kubenswrapper[4892]: I0122 09:45:46.323307 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:45:46 crc kubenswrapper[4892]: I0122 09:45:46.324523 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:45:48 crc kubenswrapper[4892]: I0122 09:45:48.688067 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4" event={"ID":"1f22ae69-a8dd-4646-836d-d48376094ceb","Type":"ContainerStarted","Data":"73462ad602bc16f760946c351c13d0d7b86e684b34c50d0f59764d07270e743f"} Jan 22 09:45:48 crc kubenswrapper[4892]: I0122 09:45:48.712831 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4" podStartSLOduration=2.026529632 podStartE2EDuration="6.712805786s" podCreationTimestamp="2026-01-22 09:45:42 +0000 UTC" firstStartedPulling="2026-01-22 09:45:43.625695065 +0000 UTC m=+2113.469774148" lastFinishedPulling="2026-01-22 09:45:48.311971199 +0000 UTC m=+2118.156050302" observedRunningTime="2026-01-22 09:45:48.706334817 +0000 UTC 
m=+2118.550413900" watchObservedRunningTime="2026-01-22 09:45:48.712805786 +0000 UTC m=+2118.556884849" Jan 22 09:46:16 crc kubenswrapper[4892]: I0122 09:46:16.322983 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:46:16 crc kubenswrapper[4892]: I0122 09:46:16.323642 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:46:16 crc kubenswrapper[4892]: I0122 09:46:16.323698 4892 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" Jan 22 09:46:16 crc kubenswrapper[4892]: I0122 09:46:16.324571 4892 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"96413b14691349eddf0937a4dcff7c33027ccb1e003cd6440c2c014b672da430"} pod="openshift-machine-config-operator/machine-config-daemon-w87tf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 09:46:16 crc kubenswrapper[4892]: I0122 09:46:16.324649 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" containerID="cri-o://96413b14691349eddf0937a4dcff7c33027ccb1e003cd6440c2c014b672da430" gracePeriod=600 Jan 22 09:46:16 crc kubenswrapper[4892]: I0122 09:46:16.945801 4892 generic.go:334] "Generic (PLEG): container finished" podID="4765e554-3060-4876-90fe-5e054619d7a1" containerID="96413b14691349eddf0937a4dcff7c33027ccb1e003cd6440c2c014b672da430" exitCode=0 Jan 22 09:46:16 crc kubenswrapper[4892]: I0122 09:46:16.945912 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerDied","Data":"96413b14691349eddf0937a4dcff7c33027ccb1e003cd6440c2c014b672da430"} Jan 22 09:46:16 crc kubenswrapper[4892]: I0122 09:46:16.946193 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerStarted","Data":"406d3f788a21c72def457de095745fb76f7479f38bdb157442777ae85d5e5860"} Jan 22 09:46:16 crc kubenswrapper[4892]: I0122 09:46:16.946220 4892 scope.go:117] "RemoveContainer" containerID="71f692d92f13e3b3fa0c90b264737d6f7079a49854fa5892e8209bda5b8f1922" Jan 22 09:46:29 crc kubenswrapper[4892]: E0122 09:46:29.634197 4892 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f22ae69_a8dd_4646_836d_d48376094ceb.slice/crio-73462ad602bc16f760946c351c13d0d7b86e684b34c50d0f59764d07270e743f.scope\": RecentStats: unable to find data in memory cache]" Jan 22 09:46:30 crc kubenswrapper[4892]: I0122 09:46:30.065800 4892 generic.go:334] "Generic (PLEG): container finished" 
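The recurring patch_prober/prober failures culminate at 09:46:16, where the kubelet marks machine-config-daemon unhealthy and kills it with the 600s grace period shown. The probe endpoint is visible in the probe output (GET http://127.0.0.1:8798/health); a sketch of an equivalent probe using the k8s.io/api types follows, with period and threshold values that are only assumptions (the failures above arrive roughly 30 s apart, and the restart follows the third logged failure), since the daemon's real manifest may differ:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Probe shape inferred from the log: GET http://127.0.0.1:8798/health.
	probe := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Host: "127.0.0.1",
				Path: "/health",
				Port: intstr.FromInt(8798),
			},
		},
		PeriodSeconds:    30, // assumed from the ~30 s spacing of failures above
		FailureThreshold: 3,  // assumed from the kill after the third failure
	}
	fmt.Printf("%+v\n", probe)
}
```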
podID="1f22ae69-a8dd-4646-836d-d48376094ceb" containerID="73462ad602bc16f760946c351c13d0d7b86e684b34c50d0f59764d07270e743f" exitCode=0 Jan 22 09:46:30 crc kubenswrapper[4892]: I0122 09:46:30.065893 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4" event={"ID":"1f22ae69-a8dd-4646-836d-d48376094ceb","Type":"ContainerDied","Data":"73462ad602bc16f760946c351c13d0d7b86e684b34c50d0f59764d07270e743f"} Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.560030 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4" Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.629109 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-neutron-metadata-combined-ca-bundle\") pod \"1f22ae69-a8dd-4646-836d-d48376094ceb\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.629163 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-telemetry-combined-ca-bundle\") pod \"1f22ae69-a8dd-4646-836d-d48376094ceb\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.629234 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-ovn-combined-ca-bundle\") pod \"1f22ae69-a8dd-4646-836d-d48376094ceb\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.629262 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-ssh-key-openstack-edpm-ipam\") pod \"1f22ae69-a8dd-4646-836d-d48376094ceb\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.629306 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1f22ae69-a8dd-4646-836d-d48376094ceb-openstack-edpm-ipam-ovn-default-certs-0\") pod \"1f22ae69-a8dd-4646-836d-d48376094ceb\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.629344 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-repo-setup-combined-ca-bundle\") pod \"1f22ae69-a8dd-4646-836d-d48376094ceb\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.629364 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-bootstrap-combined-ca-bundle\") pod \"1f22ae69-a8dd-4646-836d-d48376094ceb\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.629698 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brvvc\" 
(UniqueName: \"kubernetes.io/projected/1f22ae69-a8dd-4646-836d-d48376094ceb-kube-api-access-brvvc\") pod \"1f22ae69-a8dd-4646-836d-d48376094ceb\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.629743 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1f22ae69-a8dd-4646-836d-d48376094ceb-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"1f22ae69-a8dd-4646-836d-d48376094ceb\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.629801 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-nova-combined-ca-bundle\") pod \"1f22ae69-a8dd-4646-836d-d48376094ceb\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.629827 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-libvirt-combined-ca-bundle\") pod \"1f22ae69-a8dd-4646-836d-d48376094ceb\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.629873 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1f22ae69-a8dd-4646-836d-d48376094ceb-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"1f22ae69-a8dd-4646-836d-d48376094ceb\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.629918 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-inventory\") pod \"1f22ae69-a8dd-4646-836d-d48376094ceb\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.629976 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1f22ae69-a8dd-4646-836d-d48376094ceb-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"1f22ae69-a8dd-4646-836d-d48376094ceb\" (UID: \"1f22ae69-a8dd-4646-836d-d48376094ceb\") " Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.637707 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f22ae69-a8dd-4646-836d-d48376094ceb-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "1f22ae69-a8dd-4646-836d-d48376094ceb" (UID: "1f22ae69-a8dd-4646-836d-d48376094ceb"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.638230 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "1f22ae69-a8dd-4646-836d-d48376094ceb" (UID: "1f22ae69-a8dd-4646-836d-d48376094ceb"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.639508 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "1f22ae69-a8dd-4646-836d-d48376094ceb" (UID: "1f22ae69-a8dd-4646-836d-d48376094ceb"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.640760 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f22ae69-a8dd-4646-836d-d48376094ceb-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "1f22ae69-a8dd-4646-836d-d48376094ceb" (UID: "1f22ae69-a8dd-4646-836d-d48376094ceb"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.641570 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f22ae69-a8dd-4646-836d-d48376094ceb-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "1f22ae69-a8dd-4646-836d-d48376094ceb" (UID: "1f22ae69-a8dd-4646-836d-d48376094ceb"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.642803 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "1f22ae69-a8dd-4646-836d-d48376094ceb" (UID: "1f22ae69-a8dd-4646-836d-d48376094ceb"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.643060 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f22ae69-a8dd-4646-836d-d48376094ceb-kube-api-access-brvvc" (OuterVolumeSpecName: "kube-api-access-brvvc") pod "1f22ae69-a8dd-4646-836d-d48376094ceb" (UID: "1f22ae69-a8dd-4646-836d-d48376094ceb"). InnerVolumeSpecName "kube-api-access-brvvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.643474 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "1f22ae69-a8dd-4646-836d-d48376094ceb" (UID: "1f22ae69-a8dd-4646-836d-d48376094ceb"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.643581 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "1f22ae69-a8dd-4646-836d-d48376094ceb" (UID: "1f22ae69-a8dd-4646-836d-d48376094ceb"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.648457 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "1f22ae69-a8dd-4646-836d-d48376094ceb" (UID: "1f22ae69-a8dd-4646-836d-d48376094ceb"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.650471 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f22ae69-a8dd-4646-836d-d48376094ceb-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "1f22ae69-a8dd-4646-836d-d48376094ceb" (UID: "1f22ae69-a8dd-4646-836d-d48376094ceb"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.657340 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "1f22ae69-a8dd-4646-836d-d48376094ceb" (UID: "1f22ae69-a8dd-4646-836d-d48376094ceb"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.674217 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1f22ae69-a8dd-4646-836d-d48376094ceb" (UID: "1f22ae69-a8dd-4646-836d-d48376094ceb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.674322 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-inventory" (OuterVolumeSpecName: "inventory") pod "1f22ae69-a8dd-4646-836d-d48376094ceb" (UID: "1f22ae69-a8dd-4646-836d-d48376094ceb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.732275 4892 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.732594 4892 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1f22ae69-a8dd-4646-836d-d48376094ceb-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.732667 4892 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.732727 4892 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.732781 4892 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.732867 4892 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.732947 4892 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1f22ae69-a8dd-4646-836d-d48376094ceb-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.733011 4892 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.733066 4892 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.733121 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brvvc\" (UniqueName: \"kubernetes.io/projected/1f22ae69-a8dd-4646-836d-d48376094ceb-kube-api-access-brvvc\") on node \"crc\" DevicePath \"\"" Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.733179 4892 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1f22ae69-a8dd-4646-836d-d48376094ceb-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.733236 4892 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.733313 4892 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f22ae69-a8dd-4646-836d-d48376094ceb-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:46:31 crc kubenswrapper[4892]: I0122 09:46:31.733403 4892 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1f22ae69-a8dd-4646-836d-d48376094ceb-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 22 09:46:32 crc kubenswrapper[4892]: I0122 09:46:32.083394 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4" event={"ID":"1f22ae69-a8dd-4646-836d-d48376094ceb","Type":"ContainerDied","Data":"aea946fa8822d5781b308b4c86a7ea34446b850f4d0ecab6c6c9e7c70a59f495"} Jan 22 09:46:32 crc kubenswrapper[4892]: I0122 09:46:32.084173 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aea946fa8822d5781b308b4c86a7ea34446b850f4d0ecab6c6c9e7c70a59f495" Jan 22 09:46:32 crc kubenswrapper[4892]: I0122 09:46:32.083724 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-276h4" Jan 22 09:46:32 crc kubenswrapper[4892]: I0122 09:46:32.186416 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqxm8"] Jan 22 09:46:32 crc kubenswrapper[4892]: E0122 09:46:32.186863 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f22ae69-a8dd-4646-836d-d48376094ceb" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 22 09:46:32 crc kubenswrapper[4892]: I0122 09:46:32.186885 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f22ae69-a8dd-4646-836d-d48376094ceb" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 22 09:46:32 crc kubenswrapper[4892]: I0122 09:46:32.187082 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f22ae69-a8dd-4646-836d-d48376094ceb" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 22 09:46:32 crc kubenswrapper[4892]: I0122 09:46:32.187743 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqxm8" Jan 22 09:46:32 crc kubenswrapper[4892]: I0122 09:46:32.190672 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:46:32 crc kubenswrapper[4892]: I0122 09:46:32.190779 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:46:32 crc kubenswrapper[4892]: I0122 09:46:32.191127 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 22 09:46:32 crc kubenswrapper[4892]: I0122 09:46:32.191171 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:46:32 crc kubenswrapper[4892]: I0122 09:46:32.191132 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lbm5h" Jan 22 09:46:32 crc kubenswrapper[4892]: I0122 09:46:32.202039 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqxm8"] Jan 22 09:46:32 crc kubenswrapper[4892]: I0122 09:46:32.245088 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/387b75ce-f980-4a8c-a230-15522ca7b923-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cqxm8\" (UID: \"387b75ce-f980-4a8c-a230-15522ca7b923\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqxm8" Jan 22 09:46:32 crc kubenswrapper[4892]: I0122 09:46:32.245168 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64vjc\" (UniqueName: \"kubernetes.io/projected/387b75ce-f980-4a8c-a230-15522ca7b923-kube-api-access-64vjc\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cqxm8\" (UID: \"387b75ce-f980-4a8c-a230-15522ca7b923\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqxm8" Jan 22 09:46:32 crc kubenswrapper[4892]: I0122 09:46:32.245205 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/387b75ce-f980-4a8c-a230-15522ca7b923-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cqxm8\" (UID: \"387b75ce-f980-4a8c-a230-15522ca7b923\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqxm8" Jan 22 09:46:32 crc kubenswrapper[4892]: I0122 09:46:32.245257 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/387b75ce-f980-4a8c-a230-15522ca7b923-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cqxm8\" (UID: \"387b75ce-f980-4a8c-a230-15522ca7b923\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqxm8" Jan 22 09:46:32 crc kubenswrapper[4892]: I0122 09:46:32.245639 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/387b75ce-f980-4a8c-a230-15522ca7b923-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cqxm8\" (UID: \"387b75ce-f980-4a8c-a230-15522ca7b923\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqxm8" Jan 22 09:46:32 crc kubenswrapper[4892]: I0122 09:46:32.348117 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-64vjc\" (UniqueName: \"kubernetes.io/projected/387b75ce-f980-4a8c-a230-15522ca7b923-kube-api-access-64vjc\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cqxm8\" (UID: \"387b75ce-f980-4a8c-a230-15522ca7b923\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqxm8" Jan 22 09:46:32 crc kubenswrapper[4892]: I0122 09:46:32.348178 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/387b75ce-f980-4a8c-a230-15522ca7b923-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cqxm8\" (UID: \"387b75ce-f980-4a8c-a230-15522ca7b923\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqxm8" Jan 22 09:46:32 crc kubenswrapper[4892]: I0122 09:46:32.348231 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/387b75ce-f980-4a8c-a230-15522ca7b923-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cqxm8\" (UID: \"387b75ce-f980-4a8c-a230-15522ca7b923\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqxm8" Jan 22 09:46:32 crc kubenswrapper[4892]: I0122 09:46:32.348332 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/387b75ce-f980-4a8c-a230-15522ca7b923-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cqxm8\" (UID: \"387b75ce-f980-4a8c-a230-15522ca7b923\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqxm8" Jan 22 09:46:32 crc kubenswrapper[4892]: I0122 09:46:32.348445 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/387b75ce-f980-4a8c-a230-15522ca7b923-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cqxm8\" (UID: \"387b75ce-f980-4a8c-a230-15522ca7b923\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqxm8" Jan 22 09:46:32 crc kubenswrapper[4892]: I0122 09:46:32.349709 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/387b75ce-f980-4a8c-a230-15522ca7b923-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cqxm8\" (UID: \"387b75ce-f980-4a8c-a230-15522ca7b923\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqxm8" Jan 22 09:46:32 crc kubenswrapper[4892]: I0122 09:46:32.357811 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/387b75ce-f980-4a8c-a230-15522ca7b923-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cqxm8\" (UID: \"387b75ce-f980-4a8c-a230-15522ca7b923\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqxm8" Jan 22 09:46:32 crc kubenswrapper[4892]: I0122 09:46:32.359843 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/387b75ce-f980-4a8c-a230-15522ca7b923-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cqxm8\" (UID: \"387b75ce-f980-4a8c-a230-15522ca7b923\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqxm8" Jan 22 09:46:32 crc kubenswrapper[4892]: I0122 09:46:32.364141 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/387b75ce-f980-4a8c-a230-15522ca7b923-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cqxm8\" (UID: \"387b75ce-f980-4a8c-a230-15522ca7b923\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqxm8" Jan 22 09:46:32 crc kubenswrapper[4892]: I0122 09:46:32.368093 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64vjc\" (UniqueName: \"kubernetes.io/projected/387b75ce-f980-4a8c-a230-15522ca7b923-kube-api-access-64vjc\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cqxm8\" (UID: \"387b75ce-f980-4a8c-a230-15522ca7b923\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqxm8" Jan 22 09:46:32 crc kubenswrapper[4892]: I0122 09:46:32.514507 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqxm8" Jan 22 09:46:33 crc kubenswrapper[4892]: I0122 09:46:33.100002 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqxm8"] Jan 22 09:46:33 crc kubenswrapper[4892]: I0122 09:46:33.115776 4892 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 09:46:34 crc kubenswrapper[4892]: I0122 09:46:34.109402 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqxm8" event={"ID":"387b75ce-f980-4a8c-a230-15522ca7b923","Type":"ContainerStarted","Data":"3ad0b213a93088d3ca988fe1868da7a989ba4c5d3b582db16c379b83b4b12582"} Jan 22 09:46:36 crc kubenswrapper[4892]: I0122 09:46:36.131959 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqxm8" event={"ID":"387b75ce-f980-4a8c-a230-15522ca7b923","Type":"ContainerStarted","Data":"f5fbb7e460581bc5e9ec6bf6c57bfa34d0a380611cefefe965ed98c60194e3a8"} Jan 22 09:46:36 crc kubenswrapper[4892]: I0122 09:46:36.155644 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqxm8" podStartSLOduration=2.244738011 podStartE2EDuration="4.1556244s" podCreationTimestamp="2026-01-22 09:46:32 +0000 UTC" firstStartedPulling="2026-01-22 09:46:33.115515722 +0000 UTC m=+2162.959594785" lastFinishedPulling="2026-01-22 09:46:35.026402111 +0000 UTC m=+2164.870481174" observedRunningTime="2026-01-22 09:46:36.149667402 +0000 UTC m=+2165.993746465" watchObservedRunningTime="2026-01-22 09:46:36.1556244 +0000 UTC m=+2165.999703453" Jan 22 09:47:10 crc kubenswrapper[4892]: I0122 09:47:10.742817 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4dbd9"] Jan 22 09:47:10 crc kubenswrapper[4892]: I0122 09:47:10.745602 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4dbd9" Jan 22 09:47:10 crc kubenswrapper[4892]: I0122 09:47:10.769045 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4dbd9"] Jan 22 09:47:10 crc kubenswrapper[4892]: I0122 09:47:10.840759 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d909a23a-0a19-4a3a-941e-d12389f6a602-utilities\") pod \"redhat-operators-4dbd9\" (UID: \"d909a23a-0a19-4a3a-941e-d12389f6a602\") " pod="openshift-marketplace/redhat-operators-4dbd9" Jan 22 09:47:10 crc kubenswrapper[4892]: I0122 09:47:10.840946 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d909a23a-0a19-4a3a-941e-d12389f6a602-catalog-content\") pod \"redhat-operators-4dbd9\" (UID: \"d909a23a-0a19-4a3a-941e-d12389f6a602\") " pod="openshift-marketplace/redhat-operators-4dbd9" Jan 22 09:47:10 crc kubenswrapper[4892]: I0122 09:47:10.840990 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv62b\" (UniqueName: \"kubernetes.io/projected/d909a23a-0a19-4a3a-941e-d12389f6a602-kube-api-access-xv62b\") pod \"redhat-operators-4dbd9\" (UID: \"d909a23a-0a19-4a3a-941e-d12389f6a602\") " pod="openshift-marketplace/redhat-operators-4dbd9" Jan 22 09:47:10 crc kubenswrapper[4892]: I0122 09:47:10.942682 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d909a23a-0a19-4a3a-941e-d12389f6a602-utilities\") pod \"redhat-operators-4dbd9\" (UID: \"d909a23a-0a19-4a3a-941e-d12389f6a602\") " pod="openshift-marketplace/redhat-operators-4dbd9" Jan 22 09:47:10 crc kubenswrapper[4892]: I0122 09:47:10.942774 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d909a23a-0a19-4a3a-941e-d12389f6a602-catalog-content\") pod \"redhat-operators-4dbd9\" (UID: \"d909a23a-0a19-4a3a-941e-d12389f6a602\") " pod="openshift-marketplace/redhat-operators-4dbd9" Jan 22 09:47:10 crc kubenswrapper[4892]: I0122 09:47:10.942801 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv62b\" (UniqueName: \"kubernetes.io/projected/d909a23a-0a19-4a3a-941e-d12389f6a602-kube-api-access-xv62b\") pod \"redhat-operators-4dbd9\" (UID: \"d909a23a-0a19-4a3a-941e-d12389f6a602\") " pod="openshift-marketplace/redhat-operators-4dbd9" Jan 22 09:47:10 crc kubenswrapper[4892]: I0122 09:47:10.943625 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d909a23a-0a19-4a3a-941e-d12389f6a602-utilities\") pod \"redhat-operators-4dbd9\" (UID: \"d909a23a-0a19-4a3a-941e-d12389f6a602\") " pod="openshift-marketplace/redhat-operators-4dbd9" Jan 22 09:47:10 crc kubenswrapper[4892]: I0122 09:47:10.943698 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d909a23a-0a19-4a3a-941e-d12389f6a602-catalog-content\") pod \"redhat-operators-4dbd9\" (UID: \"d909a23a-0a19-4a3a-941e-d12389f6a602\") " pod="openshift-marketplace/redhat-operators-4dbd9" Jan 22 09:47:10 crc kubenswrapper[4892]: I0122 09:47:10.962800 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xv62b\" (UniqueName: \"kubernetes.io/projected/d909a23a-0a19-4a3a-941e-d12389f6a602-kube-api-access-xv62b\") pod \"redhat-operators-4dbd9\" (UID: \"d909a23a-0a19-4a3a-941e-d12389f6a602\") " pod="openshift-marketplace/redhat-operators-4dbd9" Jan 22 09:47:11 crc kubenswrapper[4892]: I0122 09:47:11.072458 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4dbd9" Jan 22 09:47:11 crc kubenswrapper[4892]: I0122 09:47:11.574677 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4dbd9"] Jan 22 09:47:12 crc kubenswrapper[4892]: I0122 09:47:12.487873 4892 generic.go:334] "Generic (PLEG): container finished" podID="d909a23a-0a19-4a3a-941e-d12389f6a602" containerID="77b76a054c312a39bc50050613c22770c5beaed5e8ad4824ba8029b866234397" exitCode=0 Jan 22 09:47:12 crc kubenswrapper[4892]: I0122 09:47:12.488206 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dbd9" event={"ID":"d909a23a-0a19-4a3a-941e-d12389f6a602","Type":"ContainerDied","Data":"77b76a054c312a39bc50050613c22770c5beaed5e8ad4824ba8029b866234397"} Jan 22 09:47:12 crc kubenswrapper[4892]: I0122 09:47:12.488239 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dbd9" event={"ID":"d909a23a-0a19-4a3a-941e-d12389f6a602","Type":"ContainerStarted","Data":"c34f34c0739a242ca3e60c788e4c8307c5acb10dd491643d37117193c4dcbda7"} Jan 22 09:47:13 crc kubenswrapper[4892]: I0122 09:47:13.498652 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dbd9" event={"ID":"d909a23a-0a19-4a3a-941e-d12389f6a602","Type":"ContainerStarted","Data":"9a6d33a609adc692f114f485d0ed26338808ac3300f55d50e8dd14671dc8fc51"} Jan 22 09:47:14 crc kubenswrapper[4892]: I0122 09:47:14.512118 4892 generic.go:334] "Generic (PLEG): container finished" podID="d909a23a-0a19-4a3a-941e-d12389f6a602" containerID="9a6d33a609adc692f114f485d0ed26338808ac3300f55d50e8dd14671dc8fc51" exitCode=0 Jan 22 09:47:14 crc kubenswrapper[4892]: I0122 09:47:14.512189 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dbd9" event={"ID":"d909a23a-0a19-4a3a-941e-d12389f6a602","Type":"ContainerDied","Data":"9a6d33a609adc692f114f485d0ed26338808ac3300f55d50e8dd14671dc8fc51"} Jan 22 09:47:18 crc kubenswrapper[4892]: I0122 09:47:18.573787 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dbd9" event={"ID":"d909a23a-0a19-4a3a-941e-d12389f6a602","Type":"ContainerStarted","Data":"56836645cf4d17f2a06e6ddbe78aa545eb70950b6cac89a55ca031a95c3d8cf0"} Jan 22 09:47:18 crc kubenswrapper[4892]: I0122 09:47:18.606710 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4dbd9" podStartSLOduration=3.637527468 podStartE2EDuration="8.606684784s" podCreationTimestamp="2026-01-22 09:47:10 +0000 UTC" firstStartedPulling="2026-01-22 09:47:12.490352534 +0000 UTC m=+2202.334431597" lastFinishedPulling="2026-01-22 09:47:17.45950985 +0000 UTC m=+2207.303588913" observedRunningTime="2026-01-22 09:47:18.592167544 +0000 UTC m=+2208.436246607" watchObservedRunningTime="2026-01-22 09:47:18.606684784 +0000 UTC m=+2208.450763847" Jan 22 09:47:21 crc kubenswrapper[4892]: I0122 09:47:21.074268 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4dbd9" Jan 22 
09:47:21 crc kubenswrapper[4892]: I0122 09:47:21.074799 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4dbd9" Jan 22 09:47:22 crc kubenswrapper[4892]: I0122 09:47:22.124592 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4dbd9" podUID="d909a23a-0a19-4a3a-941e-d12389f6a602" containerName="registry-server" probeResult="failure" output=< Jan 22 09:47:22 crc kubenswrapper[4892]: timeout: failed to connect service ":50051" within 1s Jan 22 09:47:22 crc kubenswrapper[4892]: > Jan 22 09:47:31 crc kubenswrapper[4892]: I0122 09:47:31.141781 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4dbd9" Jan 22 09:47:31 crc kubenswrapper[4892]: I0122 09:47:31.198766 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4dbd9" Jan 22 09:47:31 crc kubenswrapper[4892]: I0122 09:47:31.383457 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4dbd9"] Jan 22 09:47:32 crc kubenswrapper[4892]: I0122 09:47:32.719752 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4dbd9" podUID="d909a23a-0a19-4a3a-941e-d12389f6a602" containerName="registry-server" containerID="cri-o://56836645cf4d17f2a06e6ddbe78aa545eb70950b6cac89a55ca031a95c3d8cf0" gracePeriod=2 Jan 22 09:47:33 crc kubenswrapper[4892]: I0122 09:47:33.672526 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4dbd9" Jan 22 09:47:33 crc kubenswrapper[4892]: I0122 09:47:33.728707 4892 generic.go:334] "Generic (PLEG): container finished" podID="d909a23a-0a19-4a3a-941e-d12389f6a602" containerID="56836645cf4d17f2a06e6ddbe78aa545eb70950b6cac89a55ca031a95c3d8cf0" exitCode=0 Jan 22 09:47:33 crc kubenswrapper[4892]: I0122 09:47:33.728751 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dbd9" event={"ID":"d909a23a-0a19-4a3a-941e-d12389f6a602","Type":"ContainerDied","Data":"56836645cf4d17f2a06e6ddbe78aa545eb70950b6cac89a55ca031a95c3d8cf0"} Jan 22 09:47:33 crc kubenswrapper[4892]: I0122 09:47:33.728764 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4dbd9" Jan 22 09:47:33 crc kubenswrapper[4892]: I0122 09:47:33.728777 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dbd9" event={"ID":"d909a23a-0a19-4a3a-941e-d12389f6a602","Type":"ContainerDied","Data":"c34f34c0739a242ca3e60c788e4c8307c5acb10dd491643d37117193c4dcbda7"} Jan 22 09:47:33 crc kubenswrapper[4892]: I0122 09:47:33.728794 4892 scope.go:117] "RemoveContainer" containerID="56836645cf4d17f2a06e6ddbe78aa545eb70950b6cac89a55ca031a95c3d8cf0" Jan 22 09:47:33 crc kubenswrapper[4892]: I0122 09:47:33.748616 4892 scope.go:117] "RemoveContainer" containerID="9a6d33a609adc692f114f485d0ed26338808ac3300f55d50e8dd14671dc8fc51" Jan 22 09:47:33 crc kubenswrapper[4892]: I0122 09:47:33.767506 4892 scope.go:117] "RemoveContainer" containerID="77b76a054c312a39bc50050613c22770c5beaed5e8ad4824ba8029b866234397" Jan 22 09:47:33 crc kubenswrapper[4892]: I0122 09:47:33.808178 4892 scope.go:117] "RemoveContainer" containerID="56836645cf4d17f2a06e6ddbe78aa545eb70950b6cac89a55ca031a95c3d8cf0" Jan 22 09:47:33 crc kubenswrapper[4892]: E0122 09:47:33.808668 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56836645cf4d17f2a06e6ddbe78aa545eb70950b6cac89a55ca031a95c3d8cf0\": container with ID starting with 56836645cf4d17f2a06e6ddbe78aa545eb70950b6cac89a55ca031a95c3d8cf0 not found: ID does not exist" containerID="56836645cf4d17f2a06e6ddbe78aa545eb70950b6cac89a55ca031a95c3d8cf0" Jan 22 09:47:33 crc kubenswrapper[4892]: I0122 09:47:33.808700 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56836645cf4d17f2a06e6ddbe78aa545eb70950b6cac89a55ca031a95c3d8cf0"} err="failed to get container status \"56836645cf4d17f2a06e6ddbe78aa545eb70950b6cac89a55ca031a95c3d8cf0\": rpc error: code = NotFound desc = could not find container \"56836645cf4d17f2a06e6ddbe78aa545eb70950b6cac89a55ca031a95c3d8cf0\": container with ID starting with 56836645cf4d17f2a06e6ddbe78aa545eb70950b6cac89a55ca031a95c3d8cf0 not found: ID does not exist" Jan 22 09:47:33 crc kubenswrapper[4892]: I0122 09:47:33.808721 4892 scope.go:117] "RemoveContainer" containerID="9a6d33a609adc692f114f485d0ed26338808ac3300f55d50e8dd14671dc8fc51" Jan 22 09:47:33 crc kubenswrapper[4892]: E0122 09:47:33.809193 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a6d33a609adc692f114f485d0ed26338808ac3300f55d50e8dd14671dc8fc51\": container with ID starting with 9a6d33a609adc692f114f485d0ed26338808ac3300f55d50e8dd14671dc8fc51 not found: ID does not exist" containerID="9a6d33a609adc692f114f485d0ed26338808ac3300f55d50e8dd14671dc8fc51" Jan 22 09:47:33 crc kubenswrapper[4892]: I0122 09:47:33.809219 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a6d33a609adc692f114f485d0ed26338808ac3300f55d50e8dd14671dc8fc51"} err="failed to get container status \"9a6d33a609adc692f114f485d0ed26338808ac3300f55d50e8dd14671dc8fc51\": rpc error: code = NotFound desc = could not find container \"9a6d33a609adc692f114f485d0ed26338808ac3300f55d50e8dd14671dc8fc51\": container with ID starting with 9a6d33a609adc692f114f485d0ed26338808ac3300f55d50e8dd14671dc8fc51 not found: ID does not exist" Jan 22 09:47:33 crc kubenswrapper[4892]: I0122 09:47:33.809238 4892 scope.go:117] "RemoveContainer" 
containerID="77b76a054c312a39bc50050613c22770c5beaed5e8ad4824ba8029b866234397" Jan 22 09:47:33 crc kubenswrapper[4892]: E0122 09:47:33.809550 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77b76a054c312a39bc50050613c22770c5beaed5e8ad4824ba8029b866234397\": container with ID starting with 77b76a054c312a39bc50050613c22770c5beaed5e8ad4824ba8029b866234397 not found: ID does not exist" containerID="77b76a054c312a39bc50050613c22770c5beaed5e8ad4824ba8029b866234397" Jan 22 09:47:33 crc kubenswrapper[4892]: I0122 09:47:33.809579 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77b76a054c312a39bc50050613c22770c5beaed5e8ad4824ba8029b866234397"} err="failed to get container status \"77b76a054c312a39bc50050613c22770c5beaed5e8ad4824ba8029b866234397\": rpc error: code = NotFound desc = could not find container \"77b76a054c312a39bc50050613c22770c5beaed5e8ad4824ba8029b866234397\": container with ID starting with 77b76a054c312a39bc50050613c22770c5beaed5e8ad4824ba8029b866234397 not found: ID does not exist" Jan 22 09:47:33 crc kubenswrapper[4892]: I0122 09:47:33.817337 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d909a23a-0a19-4a3a-941e-d12389f6a602-catalog-content\") pod \"d909a23a-0a19-4a3a-941e-d12389f6a602\" (UID: \"d909a23a-0a19-4a3a-941e-d12389f6a602\") " Jan 22 09:47:33 crc kubenswrapper[4892]: I0122 09:47:33.817421 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv62b\" (UniqueName: \"kubernetes.io/projected/d909a23a-0a19-4a3a-941e-d12389f6a602-kube-api-access-xv62b\") pod \"d909a23a-0a19-4a3a-941e-d12389f6a602\" (UID: \"d909a23a-0a19-4a3a-941e-d12389f6a602\") " Jan 22 09:47:33 crc kubenswrapper[4892]: I0122 09:47:33.817471 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d909a23a-0a19-4a3a-941e-d12389f6a602-utilities\") pod \"d909a23a-0a19-4a3a-941e-d12389f6a602\" (UID: \"d909a23a-0a19-4a3a-941e-d12389f6a602\") " Jan 22 09:47:33 crc kubenswrapper[4892]: I0122 09:47:33.819465 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d909a23a-0a19-4a3a-941e-d12389f6a602-utilities" (OuterVolumeSpecName: "utilities") pod "d909a23a-0a19-4a3a-941e-d12389f6a602" (UID: "d909a23a-0a19-4a3a-941e-d12389f6a602"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:47:33 crc kubenswrapper[4892]: I0122 09:47:33.823943 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d909a23a-0a19-4a3a-941e-d12389f6a602-kube-api-access-xv62b" (OuterVolumeSpecName: "kube-api-access-xv62b") pod "d909a23a-0a19-4a3a-941e-d12389f6a602" (UID: "d909a23a-0a19-4a3a-941e-d12389f6a602"). InnerVolumeSpecName "kube-api-access-xv62b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:47:33 crc kubenswrapper[4892]: I0122 09:47:33.919648 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d909a23a-0a19-4a3a-941e-d12389f6a602-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:47:33 crc kubenswrapper[4892]: I0122 09:47:33.919694 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv62b\" (UniqueName: \"kubernetes.io/projected/d909a23a-0a19-4a3a-941e-d12389f6a602-kube-api-access-xv62b\") on node \"crc\" DevicePath \"\"" Jan 22 09:47:33 crc kubenswrapper[4892]: I0122 09:47:33.937914 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d909a23a-0a19-4a3a-941e-d12389f6a602-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d909a23a-0a19-4a3a-941e-d12389f6a602" (UID: "d909a23a-0a19-4a3a-941e-d12389f6a602"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:47:34 crc kubenswrapper[4892]: I0122 09:47:34.021505 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d909a23a-0a19-4a3a-941e-d12389f6a602-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:47:34 crc kubenswrapper[4892]: I0122 09:47:34.064514 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4dbd9"] Jan 22 09:47:34 crc kubenswrapper[4892]: I0122 09:47:34.071771 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4dbd9"] Jan 22 09:47:35 crc kubenswrapper[4892]: I0122 09:47:35.431352 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d909a23a-0a19-4a3a-941e-d12389f6a602" path="/var/lib/kubelet/pods/d909a23a-0a19-4a3a-941e-d12389f6a602/volumes" Jan 22 09:47:48 crc kubenswrapper[4892]: I0122 09:47:48.889548 4892 generic.go:334] "Generic (PLEG): container finished" podID="387b75ce-f980-4a8c-a230-15522ca7b923" containerID="f5fbb7e460581bc5e9ec6bf6c57bfa34d0a380611cefefe965ed98c60194e3a8" exitCode=0 Jan 22 09:47:48 crc kubenswrapper[4892]: I0122 09:47:48.889610 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqxm8" event={"ID":"387b75ce-f980-4a8c-a230-15522ca7b923","Type":"ContainerDied","Data":"f5fbb7e460581bc5e9ec6bf6c57bfa34d0a380611cefefe965ed98c60194e3a8"} Jan 22 09:47:50 crc kubenswrapper[4892]: I0122 09:47:50.312208 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqxm8" Jan 22 09:47:50 crc kubenswrapper[4892]: I0122 09:47:50.404980 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/387b75ce-f980-4a8c-a230-15522ca7b923-ovncontroller-config-0\") pod \"387b75ce-f980-4a8c-a230-15522ca7b923\" (UID: \"387b75ce-f980-4a8c-a230-15522ca7b923\") " Jan 22 09:47:50 crc kubenswrapper[4892]: I0122 09:47:50.405047 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/387b75ce-f980-4a8c-a230-15522ca7b923-ovn-combined-ca-bundle\") pod \"387b75ce-f980-4a8c-a230-15522ca7b923\" (UID: \"387b75ce-f980-4a8c-a230-15522ca7b923\") " Jan 22 09:47:50 crc kubenswrapper[4892]: I0122 09:47:50.405067 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64vjc\" (UniqueName: \"kubernetes.io/projected/387b75ce-f980-4a8c-a230-15522ca7b923-kube-api-access-64vjc\") pod \"387b75ce-f980-4a8c-a230-15522ca7b923\" (UID: \"387b75ce-f980-4a8c-a230-15522ca7b923\") " Jan 22 09:47:50 crc kubenswrapper[4892]: I0122 09:47:50.405133 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/387b75ce-f980-4a8c-a230-15522ca7b923-inventory\") pod \"387b75ce-f980-4a8c-a230-15522ca7b923\" (UID: \"387b75ce-f980-4a8c-a230-15522ca7b923\") " Jan 22 09:47:50 crc kubenswrapper[4892]: I0122 09:47:50.405165 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/387b75ce-f980-4a8c-a230-15522ca7b923-ssh-key-openstack-edpm-ipam\") pod \"387b75ce-f980-4a8c-a230-15522ca7b923\" (UID: \"387b75ce-f980-4a8c-a230-15522ca7b923\") " Jan 22 09:47:50 crc kubenswrapper[4892]: I0122 09:47:50.412858 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/387b75ce-f980-4a8c-a230-15522ca7b923-kube-api-access-64vjc" (OuterVolumeSpecName: "kube-api-access-64vjc") pod "387b75ce-f980-4a8c-a230-15522ca7b923" (UID: "387b75ce-f980-4a8c-a230-15522ca7b923"). InnerVolumeSpecName "kube-api-access-64vjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:47:50 crc kubenswrapper[4892]: I0122 09:47:50.413393 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/387b75ce-f980-4a8c-a230-15522ca7b923-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "387b75ce-f980-4a8c-a230-15522ca7b923" (UID: "387b75ce-f980-4a8c-a230-15522ca7b923"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:47:50 crc kubenswrapper[4892]: I0122 09:47:50.430438 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/387b75ce-f980-4a8c-a230-15522ca7b923-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "387b75ce-f980-4a8c-a230-15522ca7b923" (UID: "387b75ce-f980-4a8c-a230-15522ca7b923"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:47:50 crc kubenswrapper[4892]: I0122 09:47:50.439358 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/387b75ce-f980-4a8c-a230-15522ca7b923-inventory" (OuterVolumeSpecName: "inventory") pod "387b75ce-f980-4a8c-a230-15522ca7b923" (UID: "387b75ce-f980-4a8c-a230-15522ca7b923"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:47:50 crc kubenswrapper[4892]: I0122 09:47:50.441852 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/387b75ce-f980-4a8c-a230-15522ca7b923-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "387b75ce-f980-4a8c-a230-15522ca7b923" (UID: "387b75ce-f980-4a8c-a230-15522ca7b923"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:47:50 crc kubenswrapper[4892]: I0122 09:47:50.506103 4892 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/387b75ce-f980-4a8c-a230-15522ca7b923-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:47:50 crc kubenswrapper[4892]: I0122 09:47:50.506131 4892 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/387b75ce-f980-4a8c-a230-15522ca7b923-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 22 09:47:50 crc kubenswrapper[4892]: I0122 09:47:50.506141 4892 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/387b75ce-f980-4a8c-a230-15522ca7b923-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:47:50 crc kubenswrapper[4892]: I0122 09:47:50.506150 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64vjc\" (UniqueName: \"kubernetes.io/projected/387b75ce-f980-4a8c-a230-15522ca7b923-kube-api-access-64vjc\") on node \"crc\" DevicePath \"\"" Jan 22 09:47:50 crc kubenswrapper[4892]: I0122 09:47:50.506159 4892 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/387b75ce-f980-4a8c-a230-15522ca7b923-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 09:47:50 crc kubenswrapper[4892]: I0122 09:47:50.917995 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqxm8" event={"ID":"387b75ce-f980-4a8c-a230-15522ca7b923","Type":"ContainerDied","Data":"3ad0b213a93088d3ca988fe1868da7a989ba4c5d3b582db16c379b83b4b12582"} Jan 22 09:47:50 crc kubenswrapper[4892]: I0122 09:47:50.918751 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ad0b213a93088d3ca988fe1868da7a989ba4c5d3b582db16c379b83b4b12582" Jan 22 09:47:50 crc kubenswrapper[4892]: I0122 09:47:50.918214 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqxm8" Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.009488 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq"] Jan 22 09:47:51 crc kubenswrapper[4892]: E0122 09:47:51.009885 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="387b75ce-f980-4a8c-a230-15522ca7b923" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.009908 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="387b75ce-f980-4a8c-a230-15522ca7b923" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 22 09:47:51 crc kubenswrapper[4892]: E0122 09:47:51.009921 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d909a23a-0a19-4a3a-941e-d12389f6a602" containerName="extract-content" Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.009930 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="d909a23a-0a19-4a3a-941e-d12389f6a602" containerName="extract-content" Jan 22 09:47:51 crc kubenswrapper[4892]: E0122 09:47:51.009978 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d909a23a-0a19-4a3a-941e-d12389f6a602" containerName="registry-server" Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.009987 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="d909a23a-0a19-4a3a-941e-d12389f6a602" containerName="registry-server" Jan 22 09:47:51 crc kubenswrapper[4892]: E0122 09:47:51.010003 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d909a23a-0a19-4a3a-941e-d12389f6a602" containerName="extract-utilities" Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.010011 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="d909a23a-0a19-4a3a-941e-d12389f6a602" containerName="extract-utilities" Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.010223 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="d909a23a-0a19-4a3a-941e-d12389f6a602" containerName="registry-server" Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.010235 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="387b75ce-f980-4a8c-a230-15522ca7b923" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.010860 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq" Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.012937 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.013227 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.013507 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.013647 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.013789 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lbm5h" Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.015469 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.021404 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq"] Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.051578 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e8f16545-12e1-4084-84f3-a3598a939eaf-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq\" (UID: \"e8f16545-12e1-4084-84f3-a3598a939eaf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq" Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.051646 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz9wt\" (UniqueName: \"kubernetes.io/projected/e8f16545-12e1-4084-84f3-a3598a939eaf-kube-api-access-dz9wt\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq\" (UID: \"e8f16545-12e1-4084-84f3-a3598a939eaf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq" Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.051678 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e8f16545-12e1-4084-84f3-a3598a939eaf-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq\" (UID: \"e8f16545-12e1-4084-84f3-a3598a939eaf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq" Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.051707 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8f16545-12e1-4084-84f3-a3598a939eaf-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq\" (UID: \"e8f16545-12e1-4084-84f3-a3598a939eaf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq" Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.051731 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e8f16545-12e1-4084-84f3-a3598a939eaf-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq\" (UID: \"e8f16545-12e1-4084-84f3-a3598a939eaf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq" Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.051903 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8f16545-12e1-4084-84f3-a3598a939eaf-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq\" (UID: \"e8f16545-12e1-4084-84f3-a3598a939eaf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq" Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.154202 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e8f16545-12e1-4084-84f3-a3598a939eaf-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq\" (UID: \"e8f16545-12e1-4084-84f3-a3598a939eaf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq" Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.154260 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8f16545-12e1-4084-84f3-a3598a939eaf-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq\" (UID: \"e8f16545-12e1-4084-84f3-a3598a939eaf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq" Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.154310 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f16545-12e1-4084-84f3-a3598a939eaf-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq\" (UID: \"e8f16545-12e1-4084-84f3-a3598a939eaf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq" Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.154357 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8f16545-12e1-4084-84f3-a3598a939eaf-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq\" (UID: \"e8f16545-12e1-4084-84f3-a3598a939eaf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq" Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.154462 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e8f16545-12e1-4084-84f3-a3598a939eaf-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq\" (UID: \"e8f16545-12e1-4084-84f3-a3598a939eaf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq" Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.154676 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz9wt\" (UniqueName: \"kubernetes.io/projected/e8f16545-12e1-4084-84f3-a3598a939eaf-kube-api-access-dz9wt\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq\" (UID: \"e8f16545-12e1-4084-84f3-a3598a939eaf\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq" Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.159688 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8f16545-12e1-4084-84f3-a3598a939eaf-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq\" (UID: \"e8f16545-12e1-4084-84f3-a3598a939eaf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq" Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.159895 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8f16545-12e1-4084-84f3-a3598a939eaf-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq\" (UID: \"e8f16545-12e1-4084-84f3-a3598a939eaf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq" Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.160314 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e8f16545-12e1-4084-84f3-a3598a939eaf-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq\" (UID: \"e8f16545-12e1-4084-84f3-a3598a939eaf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq" Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.163310 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f16545-12e1-4084-84f3-a3598a939eaf-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq\" (UID: \"e8f16545-12e1-4084-84f3-a3598a939eaf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq" Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.164179 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e8f16545-12e1-4084-84f3-a3598a939eaf-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq\" (UID: \"e8f16545-12e1-4084-84f3-a3598a939eaf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq" Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.177354 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz9wt\" (UniqueName: \"kubernetes.io/projected/e8f16545-12e1-4084-84f3-a3598a939eaf-kube-api-access-dz9wt\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq\" (UID: \"e8f16545-12e1-4084-84f3-a3598a939eaf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq" Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.364269 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq" Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.910470 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq"] Jan 22 09:47:51 crc kubenswrapper[4892]: I0122 09:47:51.930783 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq" event={"ID":"e8f16545-12e1-4084-84f3-a3598a939eaf","Type":"ContainerStarted","Data":"d701571ffd42b94034400accfe54c464fb2f0424fa1c10828cdf70c1153d69a1"} Jan 22 09:47:53 crc kubenswrapper[4892]: I0122 09:47:53.959522 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq" event={"ID":"e8f16545-12e1-4084-84f3-a3598a939eaf","Type":"ContainerStarted","Data":"9af37fddb9dc9170d040a321c4d107dfd8cc6a7d1d21ba491538596f9b1179d0"} Jan 22 09:47:53 crc kubenswrapper[4892]: I0122 09:47:53.992091 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq" podStartSLOduration=3.208447465 podStartE2EDuration="3.992052924s" podCreationTimestamp="2026-01-22 09:47:50 +0000 UTC" firstStartedPulling="2026-01-22 09:47:51.920638039 +0000 UTC m=+2241.764717102" lastFinishedPulling="2026-01-22 09:47:52.704243498 +0000 UTC m=+2242.548322561" observedRunningTime="2026-01-22 09:47:53.98664027 +0000 UTC m=+2243.830719333" watchObservedRunningTime="2026-01-22 09:47:53.992052924 +0000 UTC m=+2243.836131997" Jan 22 09:47:54 crc kubenswrapper[4892]: I0122 09:47:54.224105 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cpjkm"] Jan 22 09:47:54 crc kubenswrapper[4892]: I0122 09:47:54.228927 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cpjkm" Jan 22 09:47:54 crc kubenswrapper[4892]: I0122 09:47:54.245882 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cpjkm"] Jan 22 09:47:54 crc kubenswrapper[4892]: I0122 09:47:54.333866 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0997bdf-abe5-4fbf-ba27-82e218f1fb95-utilities\") pod \"certified-operators-cpjkm\" (UID: \"c0997bdf-abe5-4fbf-ba27-82e218f1fb95\") " pod="openshift-marketplace/certified-operators-cpjkm" Jan 22 09:47:54 crc kubenswrapper[4892]: I0122 09:47:54.333986 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk2kl\" (UniqueName: \"kubernetes.io/projected/c0997bdf-abe5-4fbf-ba27-82e218f1fb95-kube-api-access-tk2kl\") pod \"certified-operators-cpjkm\" (UID: \"c0997bdf-abe5-4fbf-ba27-82e218f1fb95\") " pod="openshift-marketplace/certified-operators-cpjkm" Jan 22 09:47:54 crc kubenswrapper[4892]: I0122 09:47:54.334048 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0997bdf-abe5-4fbf-ba27-82e218f1fb95-catalog-content\") pod \"certified-operators-cpjkm\" (UID: \"c0997bdf-abe5-4fbf-ba27-82e218f1fb95\") " pod="openshift-marketplace/certified-operators-cpjkm" Jan 22 09:47:54 crc kubenswrapper[4892]: I0122 09:47:54.436863 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0997bdf-abe5-4fbf-ba27-82e218f1fb95-utilities\") pod \"certified-operators-cpjkm\" (UID: \"c0997bdf-abe5-4fbf-ba27-82e218f1fb95\") " pod="openshift-marketplace/certified-operators-cpjkm" Jan 22 09:47:54 crc kubenswrapper[4892]: I0122 09:47:54.436980 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk2kl\" (UniqueName: \"kubernetes.io/projected/c0997bdf-abe5-4fbf-ba27-82e218f1fb95-kube-api-access-tk2kl\") pod \"certified-operators-cpjkm\" (UID: \"c0997bdf-abe5-4fbf-ba27-82e218f1fb95\") " pod="openshift-marketplace/certified-operators-cpjkm" Jan 22 09:47:54 crc kubenswrapper[4892]: I0122 09:47:54.437030 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0997bdf-abe5-4fbf-ba27-82e218f1fb95-catalog-content\") pod \"certified-operators-cpjkm\" (UID: \"c0997bdf-abe5-4fbf-ba27-82e218f1fb95\") " pod="openshift-marketplace/certified-operators-cpjkm" Jan 22 09:47:54 crc kubenswrapper[4892]: I0122 09:47:54.437629 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0997bdf-abe5-4fbf-ba27-82e218f1fb95-utilities\") pod \"certified-operators-cpjkm\" (UID: \"c0997bdf-abe5-4fbf-ba27-82e218f1fb95\") " pod="openshift-marketplace/certified-operators-cpjkm" Jan 22 09:47:54 crc kubenswrapper[4892]: I0122 09:47:54.437647 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0997bdf-abe5-4fbf-ba27-82e218f1fb95-catalog-content\") pod \"certified-operators-cpjkm\" (UID: \"c0997bdf-abe5-4fbf-ba27-82e218f1fb95\") " pod="openshift-marketplace/certified-operators-cpjkm" Jan 22 09:47:54 crc kubenswrapper[4892]: I0122 09:47:54.463906 4892 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tk2kl\" (UniqueName: \"kubernetes.io/projected/c0997bdf-abe5-4fbf-ba27-82e218f1fb95-kube-api-access-tk2kl\") pod \"certified-operators-cpjkm\" (UID: \"c0997bdf-abe5-4fbf-ba27-82e218f1fb95\") " pod="openshift-marketplace/certified-operators-cpjkm" Jan 22 09:47:54 crc kubenswrapper[4892]: I0122 09:47:54.559869 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cpjkm" Jan 22 09:47:55 crc kubenswrapper[4892]: I0122 09:47:55.237136 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cpjkm"] Jan 22 09:47:55 crc kubenswrapper[4892]: W0122 09:47:55.245657 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0997bdf_abe5_4fbf_ba27_82e218f1fb95.slice/crio-92e480e140c08fb1ae2964c7d6fa5f9715580b4995cb05e5fc3e90f45ed73af2 WatchSource:0}: Error finding container 92e480e140c08fb1ae2964c7d6fa5f9715580b4995cb05e5fc3e90f45ed73af2: Status 404 returned error can't find the container with id 92e480e140c08fb1ae2964c7d6fa5f9715580b4995cb05e5fc3e90f45ed73af2 Jan 22 09:47:55 crc kubenswrapper[4892]: I0122 09:47:55.995719 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cpjkm" event={"ID":"c0997bdf-abe5-4fbf-ba27-82e218f1fb95","Type":"ContainerStarted","Data":"92e480e140c08fb1ae2964c7d6fa5f9715580b4995cb05e5fc3e90f45ed73af2"} Jan 22 09:47:57 crc kubenswrapper[4892]: I0122 09:47:57.005216 4892 generic.go:334] "Generic (PLEG): container finished" podID="c0997bdf-abe5-4fbf-ba27-82e218f1fb95" containerID="bc92c64557580abdf3de9cbcba058cac6a675d1229dedf70259a32c4ec3f8967" exitCode=0 Jan 22 09:47:57 crc kubenswrapper[4892]: I0122 09:47:57.005327 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cpjkm" event={"ID":"c0997bdf-abe5-4fbf-ba27-82e218f1fb95","Type":"ContainerDied","Data":"bc92c64557580abdf3de9cbcba058cac6a675d1229dedf70259a32c4ec3f8967"} Jan 22 09:48:03 crc kubenswrapper[4892]: I0122 09:48:03.060591 4892 generic.go:334] "Generic (PLEG): container finished" podID="c0997bdf-abe5-4fbf-ba27-82e218f1fb95" containerID="e15686fd76330ee4dfda5514030c2c31ebe239aa4874d7b7d8e48a7d25d7f59f" exitCode=0 Jan 22 09:48:03 crc kubenswrapper[4892]: I0122 09:48:03.060964 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cpjkm" event={"ID":"c0997bdf-abe5-4fbf-ba27-82e218f1fb95","Type":"ContainerDied","Data":"e15686fd76330ee4dfda5514030c2c31ebe239aa4874d7b7d8e48a7d25d7f59f"} Jan 22 09:48:04 crc kubenswrapper[4892]: I0122 09:48:04.070703 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cpjkm" event={"ID":"c0997bdf-abe5-4fbf-ba27-82e218f1fb95","Type":"ContainerStarted","Data":"f0f2dbe07a3178af7e5055365b8f3d5a97f01b67eb9069d9e3e6e47f4f5e2df1"} Jan 22 09:48:04 crc kubenswrapper[4892]: I0122 09:48:04.094712 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cpjkm" podStartSLOduration=3.483244854 podStartE2EDuration="10.094694148s" podCreationTimestamp="2026-01-22 09:47:54 +0000 UTC" firstStartedPulling="2026-01-22 09:47:57.007164803 +0000 UTC m=+2246.851243866" lastFinishedPulling="2026-01-22 09:48:03.618614087 +0000 UTC m=+2253.462693160" observedRunningTime="2026-01-22 09:48:04.089902219 +0000 UTC 
m=+2253.933981302" watchObservedRunningTime="2026-01-22 09:48:04.094694148 +0000 UTC m=+2253.938773211" Jan 22 09:48:04 crc kubenswrapper[4892]: I0122 09:48:04.560245 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cpjkm" Jan 22 09:48:04 crc kubenswrapper[4892]: I0122 09:48:04.560328 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cpjkm" Jan 22 09:48:05 crc kubenswrapper[4892]: I0122 09:48:05.610806 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-cpjkm" podUID="c0997bdf-abe5-4fbf-ba27-82e218f1fb95" containerName="registry-server" probeResult="failure" output=< Jan 22 09:48:05 crc kubenswrapper[4892]: timeout: failed to connect service ":50051" within 1s Jan 22 09:48:05 crc kubenswrapper[4892]: > Jan 22 09:48:14 crc kubenswrapper[4892]: I0122 09:48:14.608599 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cpjkm" Jan 22 09:48:14 crc kubenswrapper[4892]: I0122 09:48:14.660202 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cpjkm" Jan 22 09:48:14 crc kubenswrapper[4892]: I0122 09:48:14.855068 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cpjkm"] Jan 22 09:48:16 crc kubenswrapper[4892]: I0122 09:48:16.178133 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cpjkm" podUID="c0997bdf-abe5-4fbf-ba27-82e218f1fb95" containerName="registry-server" containerID="cri-o://f0f2dbe07a3178af7e5055365b8f3d5a97f01b67eb9069d9e3e6e47f4f5e2df1" gracePeriod=2 Jan 22 09:48:16 crc kubenswrapper[4892]: I0122 09:48:16.323521 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:48:16 crc kubenswrapper[4892]: I0122 09:48:16.323920 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:48:18 crc kubenswrapper[4892]: I0122 09:48:18.198579 4892 generic.go:334] "Generic (PLEG): container finished" podID="c0997bdf-abe5-4fbf-ba27-82e218f1fb95" containerID="f0f2dbe07a3178af7e5055365b8f3d5a97f01b67eb9069d9e3e6e47f4f5e2df1" exitCode=0 Jan 22 09:48:18 crc kubenswrapper[4892]: I0122 09:48:18.198645 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cpjkm" event={"ID":"c0997bdf-abe5-4fbf-ba27-82e218f1fb95","Type":"ContainerDied","Data":"f0f2dbe07a3178af7e5055365b8f3d5a97f01b67eb9069d9e3e6e47f4f5e2df1"} Jan 22 09:48:21 crc kubenswrapper[4892]: I0122 09:48:21.472375 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cpjkm" Jan 22 09:48:21 crc kubenswrapper[4892]: I0122 09:48:21.609277 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0997bdf-abe5-4fbf-ba27-82e218f1fb95-catalog-content\") pod \"c0997bdf-abe5-4fbf-ba27-82e218f1fb95\" (UID: \"c0997bdf-abe5-4fbf-ba27-82e218f1fb95\") " Jan 22 09:48:21 crc kubenswrapper[4892]: I0122 09:48:21.609678 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk2kl\" (UniqueName: \"kubernetes.io/projected/c0997bdf-abe5-4fbf-ba27-82e218f1fb95-kube-api-access-tk2kl\") pod \"c0997bdf-abe5-4fbf-ba27-82e218f1fb95\" (UID: \"c0997bdf-abe5-4fbf-ba27-82e218f1fb95\") " Jan 22 09:48:21 crc kubenswrapper[4892]: I0122 09:48:21.609777 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0997bdf-abe5-4fbf-ba27-82e218f1fb95-utilities\") pod \"c0997bdf-abe5-4fbf-ba27-82e218f1fb95\" (UID: \"c0997bdf-abe5-4fbf-ba27-82e218f1fb95\") " Jan 22 09:48:21 crc kubenswrapper[4892]: I0122 09:48:21.611022 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0997bdf-abe5-4fbf-ba27-82e218f1fb95-utilities" (OuterVolumeSpecName: "utilities") pod "c0997bdf-abe5-4fbf-ba27-82e218f1fb95" (UID: "c0997bdf-abe5-4fbf-ba27-82e218f1fb95"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:48:21 crc kubenswrapper[4892]: I0122 09:48:21.620440 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0997bdf-abe5-4fbf-ba27-82e218f1fb95-kube-api-access-tk2kl" (OuterVolumeSpecName: "kube-api-access-tk2kl") pod "c0997bdf-abe5-4fbf-ba27-82e218f1fb95" (UID: "c0997bdf-abe5-4fbf-ba27-82e218f1fb95"). InnerVolumeSpecName "kube-api-access-tk2kl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:48:21 crc kubenswrapper[4892]: I0122 09:48:21.665158 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0997bdf-abe5-4fbf-ba27-82e218f1fb95-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0997bdf-abe5-4fbf-ba27-82e218f1fb95" (UID: "c0997bdf-abe5-4fbf-ba27-82e218f1fb95"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:48:21 crc kubenswrapper[4892]: I0122 09:48:21.711804 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0997bdf-abe5-4fbf-ba27-82e218f1fb95-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:21 crc kubenswrapper[4892]: I0122 09:48:21.712028 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0997bdf-abe5-4fbf-ba27-82e218f1fb95-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:21 crc kubenswrapper[4892]: I0122 09:48:21.712045 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk2kl\" (UniqueName: \"kubernetes.io/projected/c0997bdf-abe5-4fbf-ba27-82e218f1fb95-kube-api-access-tk2kl\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:22 crc kubenswrapper[4892]: I0122 09:48:22.234947 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cpjkm" event={"ID":"c0997bdf-abe5-4fbf-ba27-82e218f1fb95","Type":"ContainerDied","Data":"92e480e140c08fb1ae2964c7d6fa5f9715580b4995cb05e5fc3e90f45ed73af2"} Jan 22 09:48:22 crc kubenswrapper[4892]: I0122 09:48:22.234987 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cpjkm" Jan 22 09:48:22 crc kubenswrapper[4892]: I0122 09:48:22.235004 4892 scope.go:117] "RemoveContainer" containerID="f0f2dbe07a3178af7e5055365b8f3d5a97f01b67eb9069d9e3e6e47f4f5e2df1" Jan 22 09:48:22 crc kubenswrapper[4892]: I0122 09:48:22.265656 4892 scope.go:117] "RemoveContainer" containerID="e15686fd76330ee4dfda5514030c2c31ebe239aa4874d7b7d8e48a7d25d7f59f" Jan 22 09:48:22 crc kubenswrapper[4892]: I0122 09:48:22.302617 4892 scope.go:117] "RemoveContainer" containerID="bc92c64557580abdf3de9cbcba058cac6a675d1229dedf70259a32c4ec3f8967" Jan 22 09:48:22 crc kubenswrapper[4892]: I0122 09:48:22.307039 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cpjkm"] Jan 22 09:48:22 crc kubenswrapper[4892]: I0122 09:48:22.316825 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cpjkm"] Jan 22 09:48:22 crc kubenswrapper[4892]: E0122 09:48:22.438513 4892 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0997bdf_abe5_4fbf_ba27_82e218f1fb95.slice\": RecentStats: unable to find data in memory cache]" Jan 22 09:48:23 crc kubenswrapper[4892]: I0122 09:48:23.432591 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0997bdf-abe5-4fbf-ba27-82e218f1fb95" path="/var/lib/kubelet/pods/c0997bdf-abe5-4fbf-ba27-82e218f1fb95/volumes" Jan 22 09:48:30 crc kubenswrapper[4892]: I0122 09:48:30.804812 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m27x7"] Jan 22 09:48:30 crc kubenswrapper[4892]: E0122 09:48:30.805693 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0997bdf-abe5-4fbf-ba27-82e218f1fb95" containerName="extract-utilities" Jan 22 09:48:30 crc kubenswrapper[4892]: I0122 09:48:30.805707 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0997bdf-abe5-4fbf-ba27-82e218f1fb95" containerName="extract-utilities" Jan 22 09:48:30 crc kubenswrapper[4892]: E0122 09:48:30.805724 4892 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c0997bdf-abe5-4fbf-ba27-82e218f1fb95" containerName="registry-server" Jan 22 09:48:30 crc kubenswrapper[4892]: I0122 09:48:30.805729 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0997bdf-abe5-4fbf-ba27-82e218f1fb95" containerName="registry-server" Jan 22 09:48:30 crc kubenswrapper[4892]: E0122 09:48:30.805758 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0997bdf-abe5-4fbf-ba27-82e218f1fb95" containerName="extract-content" Jan 22 09:48:30 crc kubenswrapper[4892]: I0122 09:48:30.805764 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0997bdf-abe5-4fbf-ba27-82e218f1fb95" containerName="extract-content" Jan 22 09:48:30 crc kubenswrapper[4892]: I0122 09:48:30.805933 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0997bdf-abe5-4fbf-ba27-82e218f1fb95" containerName="registry-server" Jan 22 09:48:30 crc kubenswrapper[4892]: I0122 09:48:30.807430 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m27x7" Jan 22 09:48:30 crc kubenswrapper[4892]: I0122 09:48:30.833856 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m27x7"] Jan 22 09:48:30 crc kubenswrapper[4892]: I0122 09:48:30.854797 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/848fcaae-ee04-499d-820f-1aed021dd0bb-utilities\") pod \"redhat-marketplace-m27x7\" (UID: \"848fcaae-ee04-499d-820f-1aed021dd0bb\") " pod="openshift-marketplace/redhat-marketplace-m27x7" Jan 22 09:48:30 crc kubenswrapper[4892]: I0122 09:48:30.854847 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zfps\" (UniqueName: \"kubernetes.io/projected/848fcaae-ee04-499d-820f-1aed021dd0bb-kube-api-access-9zfps\") pod \"redhat-marketplace-m27x7\" (UID: \"848fcaae-ee04-499d-820f-1aed021dd0bb\") " pod="openshift-marketplace/redhat-marketplace-m27x7" Jan 22 09:48:30 crc kubenswrapper[4892]: I0122 09:48:30.854871 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/848fcaae-ee04-499d-820f-1aed021dd0bb-catalog-content\") pod \"redhat-marketplace-m27x7\" (UID: \"848fcaae-ee04-499d-820f-1aed021dd0bb\") " pod="openshift-marketplace/redhat-marketplace-m27x7" Jan 22 09:48:30 crc kubenswrapper[4892]: I0122 09:48:30.957645 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/848fcaae-ee04-499d-820f-1aed021dd0bb-utilities\") pod \"redhat-marketplace-m27x7\" (UID: \"848fcaae-ee04-499d-820f-1aed021dd0bb\") " pod="openshift-marketplace/redhat-marketplace-m27x7" Jan 22 09:48:30 crc kubenswrapper[4892]: I0122 09:48:30.958048 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zfps\" (UniqueName: \"kubernetes.io/projected/848fcaae-ee04-499d-820f-1aed021dd0bb-kube-api-access-9zfps\") pod \"redhat-marketplace-m27x7\" (UID: \"848fcaae-ee04-499d-820f-1aed021dd0bb\") " pod="openshift-marketplace/redhat-marketplace-m27x7" Jan 22 09:48:30 crc kubenswrapper[4892]: I0122 09:48:30.958079 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/848fcaae-ee04-499d-820f-1aed021dd0bb-catalog-content\") pod 
\"redhat-marketplace-m27x7\" (UID: \"848fcaae-ee04-499d-820f-1aed021dd0bb\") " pod="openshift-marketplace/redhat-marketplace-m27x7" Jan 22 09:48:30 crc kubenswrapper[4892]: I0122 09:48:30.958715 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/848fcaae-ee04-499d-820f-1aed021dd0bb-catalog-content\") pod \"redhat-marketplace-m27x7\" (UID: \"848fcaae-ee04-499d-820f-1aed021dd0bb\") " pod="openshift-marketplace/redhat-marketplace-m27x7" Jan 22 09:48:30 crc kubenswrapper[4892]: I0122 09:48:30.959043 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/848fcaae-ee04-499d-820f-1aed021dd0bb-utilities\") pod \"redhat-marketplace-m27x7\" (UID: \"848fcaae-ee04-499d-820f-1aed021dd0bb\") " pod="openshift-marketplace/redhat-marketplace-m27x7" Jan 22 09:48:30 crc kubenswrapper[4892]: I0122 09:48:30.979584 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zfps\" (UniqueName: \"kubernetes.io/projected/848fcaae-ee04-499d-820f-1aed021dd0bb-kube-api-access-9zfps\") pod \"redhat-marketplace-m27x7\" (UID: \"848fcaae-ee04-499d-820f-1aed021dd0bb\") " pod="openshift-marketplace/redhat-marketplace-m27x7" Jan 22 09:48:31 crc kubenswrapper[4892]: I0122 09:48:31.128222 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m27x7" Jan 22 09:48:31 crc kubenswrapper[4892]: I0122 09:48:31.586387 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m27x7"] Jan 22 09:48:32 crc kubenswrapper[4892]: I0122 09:48:32.334307 4892 generic.go:334] "Generic (PLEG): container finished" podID="848fcaae-ee04-499d-820f-1aed021dd0bb" containerID="746cdb3d3d1989db8b0ceb6b7e57576041f3d5bb65b499602c311e38e2b6320c" exitCode=0 Jan 22 09:48:32 crc kubenswrapper[4892]: I0122 09:48:32.334469 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m27x7" event={"ID":"848fcaae-ee04-499d-820f-1aed021dd0bb","Type":"ContainerDied","Data":"746cdb3d3d1989db8b0ceb6b7e57576041f3d5bb65b499602c311e38e2b6320c"} Jan 22 09:48:32 crc kubenswrapper[4892]: I0122 09:48:32.335499 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m27x7" event={"ID":"848fcaae-ee04-499d-820f-1aed021dd0bb","Type":"ContainerStarted","Data":"ef570bc5ccf9157e0bc4fb7530c36647128e2026865c74077c33db56a90ecc16"} Jan 22 09:48:34 crc kubenswrapper[4892]: I0122 09:48:34.368999 4892 generic.go:334] "Generic (PLEG): container finished" podID="848fcaae-ee04-499d-820f-1aed021dd0bb" containerID="5061735cbbc32df28d84a292e925f5a916a4201c293126e44bf97c043a506f11" exitCode=0 Jan 22 09:48:34 crc kubenswrapper[4892]: I0122 09:48:34.369108 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m27x7" event={"ID":"848fcaae-ee04-499d-820f-1aed021dd0bb","Type":"ContainerDied","Data":"5061735cbbc32df28d84a292e925f5a916a4201c293126e44bf97c043a506f11"} Jan 22 09:48:35 crc kubenswrapper[4892]: I0122 09:48:35.381822 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m27x7" event={"ID":"848fcaae-ee04-499d-820f-1aed021dd0bb","Type":"ContainerStarted","Data":"6b71a64d81e6b3d61f742749146c15406a4562150d806080fbf835efe4b508dd"} Jan 22 09:48:41 crc kubenswrapper[4892]: I0122 09:48:41.128410 4892 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m27x7" Jan 22 09:48:41 crc kubenswrapper[4892]: I0122 09:48:41.129331 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m27x7" Jan 22 09:48:41 crc kubenswrapper[4892]: I0122 09:48:41.183390 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m27x7" Jan 22 09:48:41 crc kubenswrapper[4892]: I0122 09:48:41.215416 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m27x7" podStartSLOduration=8.613297861 podStartE2EDuration="11.21538411s" podCreationTimestamp="2026-01-22 09:48:30 +0000 UTC" firstStartedPulling="2026-01-22 09:48:32.336153877 +0000 UTC m=+2282.180232940" lastFinishedPulling="2026-01-22 09:48:34.938240126 +0000 UTC m=+2284.782319189" observedRunningTime="2026-01-22 09:48:35.4087498 +0000 UTC m=+2285.252828863" watchObservedRunningTime="2026-01-22 09:48:41.21538411 +0000 UTC m=+2291.059463173" Jan 22 09:48:41 crc kubenswrapper[4892]: I0122 09:48:41.481698 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m27x7" Jan 22 09:48:41 crc kubenswrapper[4892]: I0122 09:48:41.529168 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m27x7"] Jan 22 09:48:43 crc kubenswrapper[4892]: I0122 09:48:43.446802 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m27x7" podUID="848fcaae-ee04-499d-820f-1aed021dd0bb" containerName="registry-server" containerID="cri-o://6b71a64d81e6b3d61f742749146c15406a4562150d806080fbf835efe4b508dd" gracePeriod=2 Jan 22 09:48:43 crc kubenswrapper[4892]: I0122 09:48:43.963106 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m27x7" Jan 22 09:48:43 crc kubenswrapper[4892]: I0122 09:48:43.988036 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zfps\" (UniqueName: \"kubernetes.io/projected/848fcaae-ee04-499d-820f-1aed021dd0bb-kube-api-access-9zfps\") pod \"848fcaae-ee04-499d-820f-1aed021dd0bb\" (UID: \"848fcaae-ee04-499d-820f-1aed021dd0bb\") " Jan 22 09:48:43 crc kubenswrapper[4892]: I0122 09:48:43.988193 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/848fcaae-ee04-499d-820f-1aed021dd0bb-catalog-content\") pod \"848fcaae-ee04-499d-820f-1aed021dd0bb\" (UID: \"848fcaae-ee04-499d-820f-1aed021dd0bb\") " Jan 22 09:48:43 crc kubenswrapper[4892]: I0122 09:48:43.988406 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/848fcaae-ee04-499d-820f-1aed021dd0bb-utilities\") pod \"848fcaae-ee04-499d-820f-1aed021dd0bb\" (UID: \"848fcaae-ee04-499d-820f-1aed021dd0bb\") " Jan 22 09:48:43 crc kubenswrapper[4892]: I0122 09:48:43.989609 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/848fcaae-ee04-499d-820f-1aed021dd0bb-utilities" (OuterVolumeSpecName: "utilities") pod "848fcaae-ee04-499d-820f-1aed021dd0bb" (UID: "848fcaae-ee04-499d-820f-1aed021dd0bb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:48:43 crc kubenswrapper[4892]: I0122 09:48:43.999753 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/848fcaae-ee04-499d-820f-1aed021dd0bb-kube-api-access-9zfps" (OuterVolumeSpecName: "kube-api-access-9zfps") pod "848fcaae-ee04-499d-820f-1aed021dd0bb" (UID: "848fcaae-ee04-499d-820f-1aed021dd0bb"). InnerVolumeSpecName "kube-api-access-9zfps". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:48:44 crc kubenswrapper[4892]: I0122 09:48:44.016489 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/848fcaae-ee04-499d-820f-1aed021dd0bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "848fcaae-ee04-499d-820f-1aed021dd0bb" (UID: "848fcaae-ee04-499d-820f-1aed021dd0bb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:48:44 crc kubenswrapper[4892]: I0122 09:48:44.090205 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/848fcaae-ee04-499d-820f-1aed021dd0bb-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:44 crc kubenswrapper[4892]: I0122 09:48:44.090255 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zfps\" (UniqueName: \"kubernetes.io/projected/848fcaae-ee04-499d-820f-1aed021dd0bb-kube-api-access-9zfps\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:44 crc kubenswrapper[4892]: I0122 09:48:44.090270 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/848fcaae-ee04-499d-820f-1aed021dd0bb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:44 crc kubenswrapper[4892]: I0122 09:48:44.459181 4892 generic.go:334] "Generic (PLEG): container finished" podID="848fcaae-ee04-499d-820f-1aed021dd0bb" containerID="6b71a64d81e6b3d61f742749146c15406a4562150d806080fbf835efe4b508dd" exitCode=0 Jan 22 09:48:44 crc kubenswrapper[4892]: I0122 09:48:44.459274 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m27x7" event={"ID":"848fcaae-ee04-499d-820f-1aed021dd0bb","Type":"ContainerDied","Data":"6b71a64d81e6b3d61f742749146c15406a4562150d806080fbf835efe4b508dd"} Jan 22 09:48:44 crc kubenswrapper[4892]: I0122 09:48:44.459601 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m27x7" event={"ID":"848fcaae-ee04-499d-820f-1aed021dd0bb","Type":"ContainerDied","Data":"ef570bc5ccf9157e0bc4fb7530c36647128e2026865c74077c33db56a90ecc16"} Jan 22 09:48:44 crc kubenswrapper[4892]: I0122 09:48:44.459621 4892 scope.go:117] "RemoveContainer" containerID="6b71a64d81e6b3d61f742749146c15406a4562150d806080fbf835efe4b508dd" Jan 22 09:48:44 crc kubenswrapper[4892]: I0122 09:48:44.459387 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m27x7" Jan 22 09:48:44 crc kubenswrapper[4892]: I0122 09:48:44.485754 4892 scope.go:117] "RemoveContainer" containerID="5061735cbbc32df28d84a292e925f5a916a4201c293126e44bf97c043a506f11" Jan 22 09:48:44 crc kubenswrapper[4892]: I0122 09:48:44.511446 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m27x7"] Jan 22 09:48:44 crc kubenswrapper[4892]: I0122 09:48:44.518533 4892 scope.go:117] "RemoveContainer" containerID="746cdb3d3d1989db8b0ceb6b7e57576041f3d5bb65b499602c311e38e2b6320c" Jan 22 09:48:44 crc kubenswrapper[4892]: I0122 09:48:44.519201 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m27x7"] Jan 22 09:48:44 crc kubenswrapper[4892]: I0122 09:48:44.569832 4892 scope.go:117] "RemoveContainer" containerID="6b71a64d81e6b3d61f742749146c15406a4562150d806080fbf835efe4b508dd" Jan 22 09:48:44 crc kubenswrapper[4892]: E0122 09:48:44.571773 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b71a64d81e6b3d61f742749146c15406a4562150d806080fbf835efe4b508dd\": container with ID starting with 6b71a64d81e6b3d61f742749146c15406a4562150d806080fbf835efe4b508dd not found: ID does not exist" containerID="6b71a64d81e6b3d61f742749146c15406a4562150d806080fbf835efe4b508dd" Jan 22 09:48:44 crc kubenswrapper[4892]: I0122 09:48:44.571810 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b71a64d81e6b3d61f742749146c15406a4562150d806080fbf835efe4b508dd"} err="failed to get container status \"6b71a64d81e6b3d61f742749146c15406a4562150d806080fbf835efe4b508dd\": rpc error: code = NotFound desc = could not find container \"6b71a64d81e6b3d61f742749146c15406a4562150d806080fbf835efe4b508dd\": container with ID starting with 6b71a64d81e6b3d61f742749146c15406a4562150d806080fbf835efe4b508dd not found: ID does not exist" Jan 22 09:48:44 crc kubenswrapper[4892]: I0122 09:48:44.571836 4892 scope.go:117] "RemoveContainer" containerID="5061735cbbc32df28d84a292e925f5a916a4201c293126e44bf97c043a506f11" Jan 22 09:48:44 crc kubenswrapper[4892]: E0122 09:48:44.572250 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5061735cbbc32df28d84a292e925f5a916a4201c293126e44bf97c043a506f11\": container with ID starting with 5061735cbbc32df28d84a292e925f5a916a4201c293126e44bf97c043a506f11 not found: ID does not exist" containerID="5061735cbbc32df28d84a292e925f5a916a4201c293126e44bf97c043a506f11" Jan 22 09:48:44 crc kubenswrapper[4892]: I0122 09:48:44.572311 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5061735cbbc32df28d84a292e925f5a916a4201c293126e44bf97c043a506f11"} err="failed to get container status \"5061735cbbc32df28d84a292e925f5a916a4201c293126e44bf97c043a506f11\": rpc error: code = NotFound desc = could not find container \"5061735cbbc32df28d84a292e925f5a916a4201c293126e44bf97c043a506f11\": container with ID starting with 5061735cbbc32df28d84a292e925f5a916a4201c293126e44bf97c043a506f11 not found: ID does not exist" Jan 22 09:48:44 crc kubenswrapper[4892]: I0122 09:48:44.572340 4892 scope.go:117] "RemoveContainer" containerID="746cdb3d3d1989db8b0ceb6b7e57576041f3d5bb65b499602c311e38e2b6320c" Jan 22 09:48:44 crc kubenswrapper[4892]: E0122 09:48:44.573005 4892 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"746cdb3d3d1989db8b0ceb6b7e57576041f3d5bb65b499602c311e38e2b6320c\": container with ID starting with 746cdb3d3d1989db8b0ceb6b7e57576041f3d5bb65b499602c311e38e2b6320c not found: ID does not exist" containerID="746cdb3d3d1989db8b0ceb6b7e57576041f3d5bb65b499602c311e38e2b6320c" Jan 22 09:48:44 crc kubenswrapper[4892]: I0122 09:48:44.573040 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"746cdb3d3d1989db8b0ceb6b7e57576041f3d5bb65b499602c311e38e2b6320c"} err="failed to get container status \"746cdb3d3d1989db8b0ceb6b7e57576041f3d5bb65b499602c311e38e2b6320c\": rpc error: code = NotFound desc = could not find container \"746cdb3d3d1989db8b0ceb6b7e57576041f3d5bb65b499602c311e38e2b6320c\": container with ID starting with 746cdb3d3d1989db8b0ceb6b7e57576041f3d5bb65b499602c311e38e2b6320c not found: ID does not exist" Jan 22 09:48:45 crc kubenswrapper[4892]: I0122 09:48:45.435206 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="848fcaae-ee04-499d-820f-1aed021dd0bb" path="/var/lib/kubelet/pods/848fcaae-ee04-499d-820f-1aed021dd0bb/volumes" Jan 22 09:48:46 crc kubenswrapper[4892]: I0122 09:48:46.323839 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:48:46 crc kubenswrapper[4892]: I0122 09:48:46.324273 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:48:49 crc kubenswrapper[4892]: I0122 09:48:49.522419 4892 generic.go:334] "Generic (PLEG): container finished" podID="e8f16545-12e1-4084-84f3-a3598a939eaf" containerID="9af37fddb9dc9170d040a321c4d107dfd8cc6a7d1d21ba491538596f9b1179d0" exitCode=0 Jan 22 09:48:49 crc kubenswrapper[4892]: I0122 09:48:49.522510 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq" event={"ID":"e8f16545-12e1-4084-84f3-a3598a939eaf","Type":"ContainerDied","Data":"9af37fddb9dc9170d040a321c4d107dfd8cc6a7d1d21ba491538596f9b1179d0"} Jan 22 09:48:50 crc kubenswrapper[4892]: I0122 09:48:50.966178 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.142850 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8f16545-12e1-4084-84f3-a3598a939eaf-ssh-key-openstack-edpm-ipam\") pod \"e8f16545-12e1-4084-84f3-a3598a939eaf\" (UID: \"e8f16545-12e1-4084-84f3-a3598a939eaf\") " Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.142909 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e8f16545-12e1-4084-84f3-a3598a939eaf-nova-metadata-neutron-config-0\") pod \"e8f16545-12e1-4084-84f3-a3598a939eaf\" (UID: \"e8f16545-12e1-4084-84f3-a3598a939eaf\") " Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.142930 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e8f16545-12e1-4084-84f3-a3598a939eaf-neutron-ovn-metadata-agent-neutron-config-0\") pod \"e8f16545-12e1-4084-84f3-a3598a939eaf\" (UID: \"e8f16545-12e1-4084-84f3-a3598a939eaf\") " Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.143007 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz9wt\" (UniqueName: \"kubernetes.io/projected/e8f16545-12e1-4084-84f3-a3598a939eaf-kube-api-access-dz9wt\") pod \"e8f16545-12e1-4084-84f3-a3598a939eaf\" (UID: \"e8f16545-12e1-4084-84f3-a3598a939eaf\") " Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.143132 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f16545-12e1-4084-84f3-a3598a939eaf-neutron-metadata-combined-ca-bundle\") pod \"e8f16545-12e1-4084-84f3-a3598a939eaf\" (UID: \"e8f16545-12e1-4084-84f3-a3598a939eaf\") " Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.143190 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8f16545-12e1-4084-84f3-a3598a939eaf-inventory\") pod \"e8f16545-12e1-4084-84f3-a3598a939eaf\" (UID: \"e8f16545-12e1-4084-84f3-a3598a939eaf\") " Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.149086 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8f16545-12e1-4084-84f3-a3598a939eaf-kube-api-access-dz9wt" (OuterVolumeSpecName: "kube-api-access-dz9wt") pod "e8f16545-12e1-4084-84f3-a3598a939eaf" (UID: "e8f16545-12e1-4084-84f3-a3598a939eaf"). InnerVolumeSpecName "kube-api-access-dz9wt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.163658 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8f16545-12e1-4084-84f3-a3598a939eaf-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "e8f16545-12e1-4084-84f3-a3598a939eaf" (UID: "e8f16545-12e1-4084-84f3-a3598a939eaf"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.178063 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8f16545-12e1-4084-84f3-a3598a939eaf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e8f16545-12e1-4084-84f3-a3598a939eaf" (UID: "e8f16545-12e1-4084-84f3-a3598a939eaf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.179655 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8f16545-12e1-4084-84f3-a3598a939eaf-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "e8f16545-12e1-4084-84f3-a3598a939eaf" (UID: "e8f16545-12e1-4084-84f3-a3598a939eaf"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.187681 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8f16545-12e1-4084-84f3-a3598a939eaf-inventory" (OuterVolumeSpecName: "inventory") pod "e8f16545-12e1-4084-84f3-a3598a939eaf" (UID: "e8f16545-12e1-4084-84f3-a3598a939eaf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.192447 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8f16545-12e1-4084-84f3-a3598a939eaf-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "e8f16545-12e1-4084-84f3-a3598a939eaf" (UID: "e8f16545-12e1-4084-84f3-a3598a939eaf"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.244986 4892 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f16545-12e1-4084-84f3-a3598a939eaf-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.245025 4892 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8f16545-12e1-4084-84f3-a3598a939eaf-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.245037 4892 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8f16545-12e1-4084-84f3-a3598a939eaf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.245047 4892 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e8f16545-12e1-4084-84f3-a3598a939eaf-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.245056 4892 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e8f16545-12e1-4084-84f3-a3598a939eaf-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.245065 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dz9wt\" (UniqueName: \"kubernetes.io/projected/e8f16545-12e1-4084-84f3-a3598a939eaf-kube-api-access-dz9wt\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.544919 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq" event={"ID":"e8f16545-12e1-4084-84f3-a3598a939eaf","Type":"ContainerDied","Data":"d701571ffd42b94034400accfe54c464fb2f0424fa1c10828cdf70c1153d69a1"} Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.544981 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d701571ffd42b94034400accfe54c464fb2f0424fa1c10828cdf70c1153d69a1" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.545006 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.658210 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg"] Jan 22 09:48:51 crc kubenswrapper[4892]: E0122 09:48:51.658702 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8f16545-12e1-4084-84f3-a3598a939eaf" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.658727 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8f16545-12e1-4084-84f3-a3598a939eaf" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 22 09:48:51 crc kubenswrapper[4892]: E0122 09:48:51.658748 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="848fcaae-ee04-499d-820f-1aed021dd0bb" containerName="extract-utilities" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.658757 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="848fcaae-ee04-499d-820f-1aed021dd0bb" containerName="extract-utilities" Jan 22 09:48:51 crc kubenswrapper[4892]: E0122 09:48:51.658800 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="848fcaae-ee04-499d-820f-1aed021dd0bb" containerName="extract-content" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.658810 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="848fcaae-ee04-499d-820f-1aed021dd0bb" containerName="extract-content" Jan 22 09:48:51 crc kubenswrapper[4892]: E0122 09:48:51.658824 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="848fcaae-ee04-499d-820f-1aed021dd0bb" containerName="registry-server" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.658832 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="848fcaae-ee04-499d-820f-1aed021dd0bb" containerName="registry-server" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.659061 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="848fcaae-ee04-499d-820f-1aed021dd0bb" containerName="registry-server" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.659080 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8f16545-12e1-4084-84f3-a3598a939eaf" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.659892 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.671219 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.671395 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.671260 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.671309 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lbm5h" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.678982 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.700662 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg"] Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.759017 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/38fc771d-608b-4a8e-a7ec-7cfa932abc41-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg\" (UID: \"38fc771d-608b-4a8e-a7ec-7cfa932abc41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.759121 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38fc771d-608b-4a8e-a7ec-7cfa932abc41-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg\" (UID: \"38fc771d-608b-4a8e-a7ec-7cfa932abc41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.759244 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38fc771d-608b-4a8e-a7ec-7cfa932abc41-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg\" (UID: \"38fc771d-608b-4a8e-a7ec-7cfa932abc41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.759351 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26twr\" (UniqueName: \"kubernetes.io/projected/38fc771d-608b-4a8e-a7ec-7cfa932abc41-kube-api-access-26twr\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg\" (UID: \"38fc771d-608b-4a8e-a7ec-7cfa932abc41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.759786 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38fc771d-608b-4a8e-a7ec-7cfa932abc41-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg\" (UID: \"38fc771d-608b-4a8e-a7ec-7cfa932abc41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.861742 4892 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/38fc771d-608b-4a8e-a7ec-7cfa932abc41-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg\" (UID: \"38fc771d-608b-4a8e-a7ec-7cfa932abc41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.861809 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38fc771d-608b-4a8e-a7ec-7cfa932abc41-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg\" (UID: \"38fc771d-608b-4a8e-a7ec-7cfa932abc41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.861887 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38fc771d-608b-4a8e-a7ec-7cfa932abc41-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg\" (UID: \"38fc771d-608b-4a8e-a7ec-7cfa932abc41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.861930 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26twr\" (UniqueName: \"kubernetes.io/projected/38fc771d-608b-4a8e-a7ec-7cfa932abc41-kube-api-access-26twr\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg\" (UID: \"38fc771d-608b-4a8e-a7ec-7cfa932abc41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.861993 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38fc771d-608b-4a8e-a7ec-7cfa932abc41-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg\" (UID: \"38fc771d-608b-4a8e-a7ec-7cfa932abc41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.866835 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38fc771d-608b-4a8e-a7ec-7cfa932abc41-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg\" (UID: \"38fc771d-608b-4a8e-a7ec-7cfa932abc41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.867267 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38fc771d-608b-4a8e-a7ec-7cfa932abc41-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg\" (UID: \"38fc771d-608b-4a8e-a7ec-7cfa932abc41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.872173 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/38fc771d-608b-4a8e-a7ec-7cfa932abc41-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg\" (UID: \"38fc771d-608b-4a8e-a7ec-7cfa932abc41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.875894 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/38fc771d-608b-4a8e-a7ec-7cfa932abc41-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg\" (UID: \"38fc771d-608b-4a8e-a7ec-7cfa932abc41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg" Jan 22 09:48:51 crc kubenswrapper[4892]: I0122 09:48:51.881751 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26twr\" (UniqueName: \"kubernetes.io/projected/38fc771d-608b-4a8e-a7ec-7cfa932abc41-kube-api-access-26twr\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg\" (UID: \"38fc771d-608b-4a8e-a7ec-7cfa932abc41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg" Jan 22 09:48:52 crc kubenswrapper[4892]: I0122 09:48:52.002607 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg" Jan 22 09:48:52 crc kubenswrapper[4892]: I0122 09:48:52.593526 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg"] Jan 22 09:48:53 crc kubenswrapper[4892]: I0122 09:48:53.570670 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg" event={"ID":"38fc771d-608b-4a8e-a7ec-7cfa932abc41","Type":"ContainerStarted","Data":"6bad9b14026661ba0fb5b4147573287601099cb2da2fe04c92715097692c0704"} Jan 22 09:48:54 crc kubenswrapper[4892]: I0122 09:48:54.603662 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg" event={"ID":"38fc771d-608b-4a8e-a7ec-7cfa932abc41","Type":"ContainerStarted","Data":"2e2df0547abdf50538832158f220e852cf43ba93889733496ee101ee1cb9a26c"} Jan 22 09:48:55 crc kubenswrapper[4892]: I0122 09:48:55.639824 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg" podStartSLOduration=2.97088185 podStartE2EDuration="4.639800466s" podCreationTimestamp="2026-01-22 09:48:51 +0000 UTC" firstStartedPulling="2026-01-22 09:48:52.607101902 +0000 UTC m=+2302.451180965" lastFinishedPulling="2026-01-22 09:48:54.276020488 +0000 UTC m=+2304.120099581" observedRunningTime="2026-01-22 09:48:55.632513266 +0000 UTC m=+2305.476592329" watchObservedRunningTime="2026-01-22 09:48:55.639800466 +0000 UTC m=+2305.483879519" Jan 22 09:49:16 crc kubenswrapper[4892]: I0122 09:49:16.323987 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:49:16 crc kubenswrapper[4892]: I0122 09:49:16.324708 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:49:16 crc kubenswrapper[4892]: I0122 09:49:16.324769 4892 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" Jan 22 09:49:16 crc kubenswrapper[4892]: I0122 09:49:16.325596 4892 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"406d3f788a21c72def457de095745fb76f7479f38bdb157442777ae85d5e5860"} pod="openshift-machine-config-operator/machine-config-daemon-w87tf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 09:49:16 crc kubenswrapper[4892]: I0122 09:49:16.325658 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" containerID="cri-o://406d3f788a21c72def457de095745fb76f7479f38bdb157442777ae85d5e5860" gracePeriod=600 Jan 22 09:49:16 crc kubenswrapper[4892]: E0122 09:49:16.460797 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:49:16 crc kubenswrapper[4892]: I0122 09:49:16.807563 4892 generic.go:334] "Generic (PLEG): container finished" podID="4765e554-3060-4876-90fe-5e054619d7a1" containerID="406d3f788a21c72def457de095745fb76f7479f38bdb157442777ae85d5e5860" exitCode=0 Jan 22 09:49:16 crc kubenswrapper[4892]: I0122 09:49:16.807646 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerDied","Data":"406d3f788a21c72def457de095745fb76f7479f38bdb157442777ae85d5e5860"} Jan 22 09:49:16 crc kubenswrapper[4892]: I0122 09:49:16.807739 4892 scope.go:117] "RemoveContainer" containerID="96413b14691349eddf0937a4dcff7c33027ccb1e003cd6440c2c014b672da430" Jan 22 09:49:16 crc kubenswrapper[4892]: I0122 09:49:16.808501 4892 scope.go:117] "RemoveContainer" containerID="406d3f788a21c72def457de095745fb76f7479f38bdb157442777ae85d5e5860" Jan 22 09:49:16 crc kubenswrapper[4892]: E0122 09:49:16.809030 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:49:27 crc kubenswrapper[4892]: I0122 09:49:27.418942 4892 scope.go:117] "RemoveContainer" containerID="406d3f788a21c72def457de095745fb76f7479f38bdb157442777ae85d5e5860" Jan 22 09:49:27 crc kubenswrapper[4892]: E0122 09:49:27.419748 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:49:41 crc kubenswrapper[4892]: I0122 09:49:41.425978 4892 scope.go:117] "RemoveContainer" containerID="406d3f788a21c72def457de095745fb76f7479f38bdb157442777ae85d5e5860" Jan 22 09:49:41 crc kubenswrapper[4892]: E0122 09:49:41.435809 4892 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:49:53 crc kubenswrapper[4892]: I0122 09:49:53.418607 4892 scope.go:117] "RemoveContainer" containerID="406d3f788a21c72def457de095745fb76f7479f38bdb157442777ae85d5e5860" Jan 22 09:49:53 crc kubenswrapper[4892]: E0122 09:49:53.419969 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:50:07 crc kubenswrapper[4892]: I0122 09:50:07.419518 4892 scope.go:117] "RemoveContainer" containerID="406d3f788a21c72def457de095745fb76f7479f38bdb157442777ae85d5e5860" Jan 22 09:50:07 crc kubenswrapper[4892]: E0122 09:50:07.420237 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:50:18 crc kubenswrapper[4892]: I0122 09:50:18.419641 4892 scope.go:117] "RemoveContainer" containerID="406d3f788a21c72def457de095745fb76f7479f38bdb157442777ae85d5e5860" Jan 22 09:50:18 crc kubenswrapper[4892]: E0122 09:50:18.420991 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:50:29 crc kubenswrapper[4892]: I0122 09:50:29.419126 4892 scope.go:117] "RemoveContainer" containerID="406d3f788a21c72def457de095745fb76f7479f38bdb157442777ae85d5e5860" Jan 22 09:50:29 crc kubenswrapper[4892]: E0122 09:50:29.420243 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:50:42 crc kubenswrapper[4892]: I0122 09:50:42.418725 4892 scope.go:117] "RemoveContainer" containerID="406d3f788a21c72def457de095745fb76f7479f38bdb157442777ae85d5e5860" Jan 22 09:50:42 crc kubenswrapper[4892]: E0122 09:50:42.419563 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:50:44 crc kubenswrapper[4892]: I0122 09:50:44.325842 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cbp7l"] Jan 22 09:50:44 crc kubenswrapper[4892]: I0122 09:50:44.328252 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cbp7l" Jan 22 09:50:44 crc kubenswrapper[4892]: I0122 09:50:44.342347 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cbp7l"] Jan 22 09:50:44 crc kubenswrapper[4892]: I0122 09:50:44.449350 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26337487-29c0-4942-bd5d-04963f7b8319-utilities\") pod \"community-operators-cbp7l\" (UID: \"26337487-29c0-4942-bd5d-04963f7b8319\") " pod="openshift-marketplace/community-operators-cbp7l" Jan 22 09:50:44 crc kubenswrapper[4892]: I0122 09:50:44.449709 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26337487-29c0-4942-bd5d-04963f7b8319-catalog-content\") pod \"community-operators-cbp7l\" (UID: \"26337487-29c0-4942-bd5d-04963f7b8319\") " pod="openshift-marketplace/community-operators-cbp7l" Jan 22 09:50:44 crc kubenswrapper[4892]: I0122 09:50:44.449858 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmv2z\" (UniqueName: \"kubernetes.io/projected/26337487-29c0-4942-bd5d-04963f7b8319-kube-api-access-jmv2z\") pod \"community-operators-cbp7l\" (UID: \"26337487-29c0-4942-bd5d-04963f7b8319\") " pod="openshift-marketplace/community-operators-cbp7l" Jan 22 09:50:44 crc kubenswrapper[4892]: I0122 09:50:44.551240 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26337487-29c0-4942-bd5d-04963f7b8319-utilities\") pod \"community-operators-cbp7l\" (UID: \"26337487-29c0-4942-bd5d-04963f7b8319\") " pod="openshift-marketplace/community-operators-cbp7l" Jan 22 09:50:44 crc kubenswrapper[4892]: I0122 09:50:44.551436 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26337487-29c0-4942-bd5d-04963f7b8319-catalog-content\") pod \"community-operators-cbp7l\" (UID: \"26337487-29c0-4942-bd5d-04963f7b8319\") " pod="openshift-marketplace/community-operators-cbp7l" Jan 22 09:50:44 crc kubenswrapper[4892]: I0122 09:50:44.551503 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmv2z\" (UniqueName: \"kubernetes.io/projected/26337487-29c0-4942-bd5d-04963f7b8319-kube-api-access-jmv2z\") pod \"community-operators-cbp7l\" (UID: \"26337487-29c0-4942-bd5d-04963f7b8319\") " pod="openshift-marketplace/community-operators-cbp7l" Jan 22 09:50:44 crc kubenswrapper[4892]: I0122 09:50:44.551793 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26337487-29c0-4942-bd5d-04963f7b8319-utilities\") pod \"community-operators-cbp7l\" (UID: \"26337487-29c0-4942-bd5d-04963f7b8319\") " 
pod="openshift-marketplace/community-operators-cbp7l" Jan 22 09:50:44 crc kubenswrapper[4892]: I0122 09:50:44.552055 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26337487-29c0-4942-bd5d-04963f7b8319-catalog-content\") pod \"community-operators-cbp7l\" (UID: \"26337487-29c0-4942-bd5d-04963f7b8319\") " pod="openshift-marketplace/community-operators-cbp7l" Jan 22 09:50:44 crc kubenswrapper[4892]: I0122 09:50:44.575381 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmv2z\" (UniqueName: \"kubernetes.io/projected/26337487-29c0-4942-bd5d-04963f7b8319-kube-api-access-jmv2z\") pod \"community-operators-cbp7l\" (UID: \"26337487-29c0-4942-bd5d-04963f7b8319\") " pod="openshift-marketplace/community-operators-cbp7l" Jan 22 09:50:44 crc kubenswrapper[4892]: I0122 09:50:44.658109 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cbp7l" Jan 22 09:50:45 crc kubenswrapper[4892]: I0122 09:50:45.317853 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cbp7l"] Jan 22 09:50:45 crc kubenswrapper[4892]: W0122 09:50:45.324863 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26337487_29c0_4942_bd5d_04963f7b8319.slice/crio-6d476e64fc77412d3c422f8c12ea57617d9883211447078ff1ea78149ea29ab1 WatchSource:0}: Error finding container 6d476e64fc77412d3c422f8c12ea57617d9883211447078ff1ea78149ea29ab1: Status 404 returned error can't find the container with id 6d476e64fc77412d3c422f8c12ea57617d9883211447078ff1ea78149ea29ab1 Jan 22 09:50:45 crc kubenswrapper[4892]: I0122 09:50:45.617856 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cbp7l" event={"ID":"26337487-29c0-4942-bd5d-04963f7b8319","Type":"ContainerStarted","Data":"6d476e64fc77412d3c422f8c12ea57617d9883211447078ff1ea78149ea29ab1"} Jan 22 09:50:45 crc kubenswrapper[4892]: E0122 09:50:45.913847 4892 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26337487_29c0_4942_bd5d_04963f7b8319.slice/crio-7f7faa2899613cd7a977693ddcaa0945e7aaf958380b1e0a6bd442fca5451aa5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26337487_29c0_4942_bd5d_04963f7b8319.slice/crio-conmon-7f7faa2899613cd7a977693ddcaa0945e7aaf958380b1e0a6bd442fca5451aa5.scope\": RecentStats: unable to find data in memory cache]" Jan 22 09:50:46 crc kubenswrapper[4892]: I0122 09:50:46.628390 4892 generic.go:334] "Generic (PLEG): container finished" podID="26337487-29c0-4942-bd5d-04963f7b8319" containerID="7f7faa2899613cd7a977693ddcaa0945e7aaf958380b1e0a6bd442fca5451aa5" exitCode=0 Jan 22 09:50:46 crc kubenswrapper[4892]: I0122 09:50:46.628432 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cbp7l" event={"ID":"26337487-29c0-4942-bd5d-04963f7b8319","Type":"ContainerDied","Data":"7f7faa2899613cd7a977693ddcaa0945e7aaf958380b1e0a6bd442fca5451aa5"} Jan 22 09:50:51 crc kubenswrapper[4892]: I0122 09:50:51.682020 4892 generic.go:334] "Generic (PLEG): container finished" podID="26337487-29c0-4942-bd5d-04963f7b8319" containerID="ccb7e4945ca74ddd30f81ee8af62308758545342df988383e263605cd92d6ca9" 
exitCode=0 Jan 22 09:50:51 crc kubenswrapper[4892]: I0122 09:50:51.682145 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cbp7l" event={"ID":"26337487-29c0-4942-bd5d-04963f7b8319","Type":"ContainerDied","Data":"ccb7e4945ca74ddd30f81ee8af62308758545342df988383e263605cd92d6ca9"} Jan 22 09:50:53 crc kubenswrapper[4892]: I0122 09:50:53.419095 4892 scope.go:117] "RemoveContainer" containerID="406d3f788a21c72def457de095745fb76f7479f38bdb157442777ae85d5e5860" Jan 22 09:50:53 crc kubenswrapper[4892]: E0122 09:50:53.419720 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:50:55 crc kubenswrapper[4892]: I0122 09:50:55.719219 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cbp7l" event={"ID":"26337487-29c0-4942-bd5d-04963f7b8319","Type":"ContainerStarted","Data":"9e7cb2fc72a0947fae0dc518cca31749dec410507ffe8b298a43b1b7229cd45c"} Jan 22 09:50:55 crc kubenswrapper[4892]: I0122 09:50:55.746446 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cbp7l" podStartSLOduration=3.204474996 podStartE2EDuration="11.746422379s" podCreationTimestamp="2026-01-22 09:50:44 +0000 UTC" firstStartedPulling="2026-01-22 09:50:46.630075484 +0000 UTC m=+2416.474154547" lastFinishedPulling="2026-01-22 09:50:55.172022867 +0000 UTC m=+2425.016101930" observedRunningTime="2026-01-22 09:50:55.741075446 +0000 UTC m=+2425.585154509" watchObservedRunningTime="2026-01-22 09:50:55.746422379 +0000 UTC m=+2425.590501442" Jan 22 09:51:04 crc kubenswrapper[4892]: I0122 09:51:04.418824 4892 scope.go:117] "RemoveContainer" containerID="406d3f788a21c72def457de095745fb76f7479f38bdb157442777ae85d5e5860" Jan 22 09:51:04 crc kubenswrapper[4892]: E0122 09:51:04.419785 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:51:04 crc kubenswrapper[4892]: I0122 09:51:04.658927 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cbp7l" Jan 22 09:51:04 crc kubenswrapper[4892]: I0122 09:51:04.658999 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cbp7l" Jan 22 09:51:04 crc kubenswrapper[4892]: I0122 09:51:04.708656 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cbp7l" Jan 22 09:51:04 crc kubenswrapper[4892]: I0122 09:51:04.858729 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cbp7l" Jan 22 09:51:04 crc kubenswrapper[4892]: I0122 09:51:04.945639 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-cbp7l"] Jan 22 09:51:06 crc kubenswrapper[4892]: I0122 09:51:06.829390 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cbp7l" podUID="26337487-29c0-4942-bd5d-04963f7b8319" containerName="registry-server" containerID="cri-o://9e7cb2fc72a0947fae0dc518cca31749dec410507ffe8b298a43b1b7229cd45c" gracePeriod=2 Jan 22 09:51:07 crc kubenswrapper[4892]: I0122 09:51:07.846707 4892 generic.go:334] "Generic (PLEG): container finished" podID="26337487-29c0-4942-bd5d-04963f7b8319" containerID="9e7cb2fc72a0947fae0dc518cca31749dec410507ffe8b298a43b1b7229cd45c" exitCode=0 Jan 22 09:51:07 crc kubenswrapper[4892]: I0122 09:51:07.846773 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cbp7l" event={"ID":"26337487-29c0-4942-bd5d-04963f7b8319","Type":"ContainerDied","Data":"9e7cb2fc72a0947fae0dc518cca31749dec410507ffe8b298a43b1b7229cd45c"} Jan 22 09:51:08 crc kubenswrapper[4892]: I0122 09:51:08.376544 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cbp7l" Jan 22 09:51:08 crc kubenswrapper[4892]: I0122 09:51:08.381846 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26337487-29c0-4942-bd5d-04963f7b8319-catalog-content\") pod \"26337487-29c0-4942-bd5d-04963f7b8319\" (UID: \"26337487-29c0-4942-bd5d-04963f7b8319\") " Jan 22 09:51:08 crc kubenswrapper[4892]: I0122 09:51:08.382001 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmv2z\" (UniqueName: \"kubernetes.io/projected/26337487-29c0-4942-bd5d-04963f7b8319-kube-api-access-jmv2z\") pod \"26337487-29c0-4942-bd5d-04963f7b8319\" (UID: \"26337487-29c0-4942-bd5d-04963f7b8319\") " Jan 22 09:51:08 crc kubenswrapper[4892]: I0122 09:51:08.382104 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26337487-29c0-4942-bd5d-04963f7b8319-utilities\") pod \"26337487-29c0-4942-bd5d-04963f7b8319\" (UID: \"26337487-29c0-4942-bd5d-04963f7b8319\") " Jan 22 09:51:08 crc kubenswrapper[4892]: I0122 09:51:08.384013 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26337487-29c0-4942-bd5d-04963f7b8319-utilities" (OuterVolumeSpecName: "utilities") pod "26337487-29c0-4942-bd5d-04963f7b8319" (UID: "26337487-29c0-4942-bd5d-04963f7b8319"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:51:08 crc kubenswrapper[4892]: I0122 09:51:08.388946 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26337487-29c0-4942-bd5d-04963f7b8319-kube-api-access-jmv2z" (OuterVolumeSpecName: "kube-api-access-jmv2z") pod "26337487-29c0-4942-bd5d-04963f7b8319" (UID: "26337487-29c0-4942-bd5d-04963f7b8319"). InnerVolumeSpecName "kube-api-access-jmv2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:51:08 crc kubenswrapper[4892]: I0122 09:51:08.456912 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26337487-29c0-4942-bd5d-04963f7b8319-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26337487-29c0-4942-bd5d-04963f7b8319" (UID: "26337487-29c0-4942-bd5d-04963f7b8319"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:51:08 crc kubenswrapper[4892]: I0122 09:51:08.484617 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmv2z\" (UniqueName: \"kubernetes.io/projected/26337487-29c0-4942-bd5d-04963f7b8319-kube-api-access-jmv2z\") on node \"crc\" DevicePath \"\"" Jan 22 09:51:08 crc kubenswrapper[4892]: I0122 09:51:08.484662 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26337487-29c0-4942-bd5d-04963f7b8319-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:51:08 crc kubenswrapper[4892]: I0122 09:51:08.484678 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26337487-29c0-4942-bd5d-04963f7b8319-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:51:08 crc kubenswrapper[4892]: I0122 09:51:08.868468 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cbp7l" event={"ID":"26337487-29c0-4942-bd5d-04963f7b8319","Type":"ContainerDied","Data":"6d476e64fc77412d3c422f8c12ea57617d9883211447078ff1ea78149ea29ab1"} Jan 22 09:51:08 crc kubenswrapper[4892]: I0122 09:51:08.868592 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cbp7l" Jan 22 09:51:08 crc kubenswrapper[4892]: I0122 09:51:08.868788 4892 scope.go:117] "RemoveContainer" containerID="9e7cb2fc72a0947fae0dc518cca31749dec410507ffe8b298a43b1b7229cd45c" Jan 22 09:51:08 crc kubenswrapper[4892]: I0122 09:51:08.908075 4892 scope.go:117] "RemoveContainer" containerID="ccb7e4945ca74ddd30f81ee8af62308758545342df988383e263605cd92d6ca9" Jan 22 09:51:08 crc kubenswrapper[4892]: I0122 09:51:08.916760 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cbp7l"] Jan 22 09:51:08 crc kubenswrapper[4892]: I0122 09:51:08.929346 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cbp7l"] Jan 22 09:51:08 crc kubenswrapper[4892]: I0122 09:51:08.934586 4892 scope.go:117] "RemoveContainer" containerID="7f7faa2899613cd7a977693ddcaa0945e7aaf958380b1e0a6bd442fca5451aa5" Jan 22 09:51:09 crc kubenswrapper[4892]: I0122 09:51:09.437232 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26337487-29c0-4942-bd5d-04963f7b8319" path="/var/lib/kubelet/pods/26337487-29c0-4942-bd5d-04963f7b8319/volumes" Jan 22 09:51:16 crc kubenswrapper[4892]: I0122 09:51:16.419145 4892 scope.go:117] "RemoveContainer" containerID="406d3f788a21c72def457de095745fb76f7479f38bdb157442777ae85d5e5860" Jan 22 09:51:16 crc kubenswrapper[4892]: E0122 09:51:16.419796 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:51:28 crc kubenswrapper[4892]: I0122 09:51:28.418768 4892 scope.go:117] "RemoveContainer" containerID="406d3f788a21c72def457de095745fb76f7479f38bdb157442777ae85d5e5860" Jan 22 09:51:28 crc kubenswrapper[4892]: E0122 09:51:28.419567 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:51:41 crc kubenswrapper[4892]: I0122 09:51:41.427971 4892 scope.go:117] "RemoveContainer" containerID="406d3f788a21c72def457de095745fb76f7479f38bdb157442777ae85d5e5860" Jan 22 09:51:41 crc kubenswrapper[4892]: E0122 09:51:41.429082 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:51:56 crc kubenswrapper[4892]: I0122 09:51:56.418627 4892 scope.go:117] "RemoveContainer" containerID="406d3f788a21c72def457de095745fb76f7479f38bdb157442777ae85d5e5860" Jan 22 09:51:56 crc kubenswrapper[4892]: E0122 09:51:56.419536 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:52:09 crc kubenswrapper[4892]: I0122 09:52:09.418614 4892 scope.go:117] "RemoveContainer" containerID="406d3f788a21c72def457de095745fb76f7479f38bdb157442777ae85d5e5860" Jan 22 09:52:09 crc kubenswrapper[4892]: E0122 09:52:09.419661 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:52:23 crc kubenswrapper[4892]: I0122 09:52:23.419474 4892 scope.go:117] "RemoveContainer" containerID="406d3f788a21c72def457de095745fb76f7479f38bdb157442777ae85d5e5860" Jan 22 09:52:23 crc kubenswrapper[4892]: E0122 09:52:23.422356 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:52:38 crc kubenswrapper[4892]: I0122 09:52:38.419007 4892 scope.go:117] "RemoveContainer" containerID="406d3f788a21c72def457de095745fb76f7479f38bdb157442777ae85d5e5860" Jan 22 09:52:38 crc kubenswrapper[4892]: E0122 09:52:38.421358 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:52:51 crc kubenswrapper[4892]: I0122 09:52:51.425034 4892 scope.go:117] "RemoveContainer" containerID="406d3f788a21c72def457de095745fb76f7479f38bdb157442777ae85d5e5860" Jan 22 09:52:51 crc kubenswrapper[4892]: E0122 09:52:51.425793 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:53:03 crc kubenswrapper[4892]: I0122 09:53:03.419426 4892 scope.go:117] "RemoveContainer" containerID="406d3f788a21c72def457de095745fb76f7479f38bdb157442777ae85d5e5860" Jan 22 09:53:03 crc kubenswrapper[4892]: E0122 09:53:03.420736 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:53:17 crc kubenswrapper[4892]: I0122 09:53:17.419625 4892 scope.go:117] "RemoveContainer" containerID="406d3f788a21c72def457de095745fb76f7479f38bdb157442777ae85d5e5860" Jan 22 09:53:17 crc kubenswrapper[4892]: E0122 09:53:17.420458 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:53:29 crc kubenswrapper[4892]: I0122 09:53:29.418833 4892 scope.go:117] "RemoveContainer" containerID="406d3f788a21c72def457de095745fb76f7479f38bdb157442777ae85d5e5860" Jan 22 09:53:29 crc kubenswrapper[4892]: E0122 09:53:29.420044 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:53:44 crc kubenswrapper[4892]: I0122 09:53:44.419353 4892 scope.go:117] "RemoveContainer" containerID="406d3f788a21c72def457de095745fb76f7479f38bdb157442777ae85d5e5860" Jan 22 09:53:44 crc kubenswrapper[4892]: E0122 09:53:44.423082 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" 
podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:53:57 crc kubenswrapper[4892]: I0122 09:53:57.418495 4892 scope.go:117] "RemoveContainer" containerID="406d3f788a21c72def457de095745fb76f7479f38bdb157442777ae85d5e5860" Jan 22 09:53:57 crc kubenswrapper[4892]: E0122 09:53:57.419302 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:54:08 crc kubenswrapper[4892]: I0122 09:54:08.418772 4892 scope.go:117] "RemoveContainer" containerID="406d3f788a21c72def457de095745fb76f7479f38bdb157442777ae85d5e5860" Jan 22 09:54:08 crc kubenswrapper[4892]: E0122 09:54:08.419528 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 09:54:21 crc kubenswrapper[4892]: I0122 09:54:21.424637 4892 scope.go:117] "RemoveContainer" containerID="406d3f788a21c72def457de095745fb76f7479f38bdb157442777ae85d5e5860" Jan 22 09:54:22 crc kubenswrapper[4892]: I0122 09:54:22.643862 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerStarted","Data":"18b3137a2cc8ece0bcea98bac87e9ea542c23f80d68386bc5e1f9301d6a7852b"} Jan 22 09:54:23 crc kubenswrapper[4892]: I0122 09:54:23.652879 4892 generic.go:334] "Generic (PLEG): container finished" podID="38fc771d-608b-4a8e-a7ec-7cfa932abc41" containerID="2e2df0547abdf50538832158f220e852cf43ba93889733496ee101ee1cb9a26c" exitCode=0 Jan 22 09:54:23 crc kubenswrapper[4892]: I0122 09:54:23.652958 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg" event={"ID":"38fc771d-608b-4a8e-a7ec-7cfa932abc41","Type":"ContainerDied","Data":"2e2df0547abdf50538832158f220e852cf43ba93889733496ee101ee1cb9a26c"} Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.122770 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.263975 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/38fc771d-608b-4a8e-a7ec-7cfa932abc41-libvirt-secret-0\") pod \"38fc771d-608b-4a8e-a7ec-7cfa932abc41\" (UID: \"38fc771d-608b-4a8e-a7ec-7cfa932abc41\") " Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.264039 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38fc771d-608b-4a8e-a7ec-7cfa932abc41-libvirt-combined-ca-bundle\") pod \"38fc771d-608b-4a8e-a7ec-7cfa932abc41\" (UID: \"38fc771d-608b-4a8e-a7ec-7cfa932abc41\") " Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.264118 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26twr\" (UniqueName: \"kubernetes.io/projected/38fc771d-608b-4a8e-a7ec-7cfa932abc41-kube-api-access-26twr\") pod \"38fc771d-608b-4a8e-a7ec-7cfa932abc41\" (UID: \"38fc771d-608b-4a8e-a7ec-7cfa932abc41\") " Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.264399 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38fc771d-608b-4a8e-a7ec-7cfa932abc41-inventory\") pod \"38fc771d-608b-4a8e-a7ec-7cfa932abc41\" (UID: \"38fc771d-608b-4a8e-a7ec-7cfa932abc41\") " Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.265105 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38fc771d-608b-4a8e-a7ec-7cfa932abc41-ssh-key-openstack-edpm-ipam\") pod \"38fc771d-608b-4a8e-a7ec-7cfa932abc41\" (UID: \"38fc771d-608b-4a8e-a7ec-7cfa932abc41\") " Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.269705 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38fc771d-608b-4a8e-a7ec-7cfa932abc41-kube-api-access-26twr" (OuterVolumeSpecName: "kube-api-access-26twr") pod "38fc771d-608b-4a8e-a7ec-7cfa932abc41" (UID: "38fc771d-608b-4a8e-a7ec-7cfa932abc41"). InnerVolumeSpecName "kube-api-access-26twr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.272363 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38fc771d-608b-4a8e-a7ec-7cfa932abc41-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "38fc771d-608b-4a8e-a7ec-7cfa932abc41" (UID: "38fc771d-608b-4a8e-a7ec-7cfa932abc41"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.292139 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38fc771d-608b-4a8e-a7ec-7cfa932abc41-inventory" (OuterVolumeSpecName: "inventory") pod "38fc771d-608b-4a8e-a7ec-7cfa932abc41" (UID: "38fc771d-608b-4a8e-a7ec-7cfa932abc41"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.293353 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38fc771d-608b-4a8e-a7ec-7cfa932abc41-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "38fc771d-608b-4a8e-a7ec-7cfa932abc41" (UID: "38fc771d-608b-4a8e-a7ec-7cfa932abc41"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.297040 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38fc771d-608b-4a8e-a7ec-7cfa932abc41-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "38fc771d-608b-4a8e-a7ec-7cfa932abc41" (UID: "38fc771d-608b-4a8e-a7ec-7cfa932abc41"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.368706 4892 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38fc771d-608b-4a8e-a7ec-7cfa932abc41-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.368754 4892 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38fc771d-608b-4a8e-a7ec-7cfa932abc41-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.368771 4892 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/38fc771d-608b-4a8e-a7ec-7cfa932abc41-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.368781 4892 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38fc771d-608b-4a8e-a7ec-7cfa932abc41-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.368790 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26twr\" (UniqueName: \"kubernetes.io/projected/38fc771d-608b-4a8e-a7ec-7cfa932abc41-kube-api-access-26twr\") on node \"crc\" DevicePath \"\"" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.683024 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg" event={"ID":"38fc771d-608b-4a8e-a7ec-7cfa932abc41","Type":"ContainerDied","Data":"6bad9b14026661ba0fb5b4147573287601099cb2da2fe04c92715097692c0704"} Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.683486 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bad9b14026661ba0fb5b4147573287601099cb2da2fe04c92715097692c0704" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.683094 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.788500 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw"] Jan 22 09:54:25 crc kubenswrapper[4892]: E0122 09:54:25.789070 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38fc771d-608b-4a8e-a7ec-7cfa932abc41" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.789109 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="38fc771d-608b-4a8e-a7ec-7cfa932abc41" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 22 09:54:25 crc kubenswrapper[4892]: E0122 09:54:25.789139 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26337487-29c0-4942-bd5d-04963f7b8319" containerName="registry-server" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.789150 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="26337487-29c0-4942-bd5d-04963f7b8319" containerName="registry-server" Jan 22 09:54:25 crc kubenswrapper[4892]: E0122 09:54:25.789184 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26337487-29c0-4942-bd5d-04963f7b8319" containerName="extract-content" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.789195 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="26337487-29c0-4942-bd5d-04963f7b8319" containerName="extract-content" Jan 22 09:54:25 crc kubenswrapper[4892]: E0122 09:54:25.789220 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26337487-29c0-4942-bd5d-04963f7b8319" containerName="extract-utilities" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.789229 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="26337487-29c0-4942-bd5d-04963f7b8319" containerName="extract-utilities" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.789498 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="38fc771d-608b-4a8e-a7ec-7cfa932abc41" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.789543 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="26337487-29c0-4942-bd5d-04963f7b8319" containerName="registry-server" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.790513 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.792504 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.792913 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.793081 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.793266 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lbm5h" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.793528 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.793719 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.793927 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.801465 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw"] Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.879653 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vnhgw\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.879730 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vnhgw\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.879762 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vnhgw\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.879827 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vnhgw\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.879879 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhlq4\" (UniqueName: 
\"kubernetes.io/projected/12c8d866-32b3-4952-bffa-4993dd9dede1-kube-api-access-dhlq4\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vnhgw\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.879971 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vnhgw\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.880068 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vnhgw\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.880110 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vnhgw\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.880136 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vnhgw\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.980755 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vnhgw\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.980810 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vnhgw\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.980832 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vnhgw\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.980851 4892 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vnhgw\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.980936 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vnhgw\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.980966 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vnhgw\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.980985 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vnhgw\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.981000 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vnhgw\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.981017 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhlq4\" (UniqueName: \"kubernetes.io/projected/12c8d866-32b3-4952-bffa-4993dd9dede1-kube-api-access-dhlq4\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vnhgw\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.984748 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vnhgw\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.984989 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vnhgw\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.985381 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vnhgw\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.985534 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vnhgw\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.985635 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vnhgw\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.985875 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vnhgw\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.991834 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vnhgw\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.997819 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vnhgw\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" Jan 22 09:54:25 crc kubenswrapper[4892]: I0122 09:54:25.999433 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhlq4\" (UniqueName: \"kubernetes.io/projected/12c8d866-32b3-4952-bffa-4993dd9dede1-kube-api-access-dhlq4\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vnhgw\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" Jan 22 09:54:26 crc kubenswrapper[4892]: I0122 09:54:26.117908 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" Jan 22 09:54:26 crc kubenswrapper[4892]: I0122 09:54:26.609969 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw"] Jan 22 09:54:26 crc kubenswrapper[4892]: I0122 09:54:26.615885 4892 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 09:54:26 crc kubenswrapper[4892]: I0122 09:54:26.694460 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" event={"ID":"12c8d866-32b3-4952-bffa-4993dd9dede1","Type":"ContainerStarted","Data":"de7befb728cc78eb85eb717b67422c27d23f587c8d82af50a039fd5befa65480"} Jan 22 09:54:28 crc kubenswrapper[4892]: I0122 09:54:28.712836 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" event={"ID":"12c8d866-32b3-4952-bffa-4993dd9dede1","Type":"ContainerStarted","Data":"b0fb29b3e5f8f6b4fbe4b9bea96330f39a7d5203123a8fbe4ca15276cdfa86e2"} Jan 22 09:54:28 crc kubenswrapper[4892]: I0122 09:54:28.733616 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" podStartSLOduration=3.013465424 podStartE2EDuration="3.733597461s" podCreationTimestamp="2026-01-22 09:54:25 +0000 UTC" firstStartedPulling="2026-01-22 09:54:26.615622712 +0000 UTC m=+2636.459701775" lastFinishedPulling="2026-01-22 09:54:27.335754749 +0000 UTC m=+2637.179833812" observedRunningTime="2026-01-22 09:54:28.729694994 +0000 UTC m=+2638.573774067" watchObservedRunningTime="2026-01-22 09:54:28.733597461 +0000 UTC m=+2638.577676524" Jan 22 09:56:46 crc kubenswrapper[4892]: I0122 09:56:46.323473 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:56:46 crc kubenswrapper[4892]: I0122 09:56:46.324096 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:56:54 crc kubenswrapper[4892]: I0122 09:56:54.995064 4892 generic.go:334] "Generic (PLEG): container finished" podID="12c8d866-32b3-4952-bffa-4993dd9dede1" containerID="b0fb29b3e5f8f6b4fbe4b9bea96330f39a7d5203123a8fbe4ca15276cdfa86e2" exitCode=0 Jan 22 09:56:54 crc kubenswrapper[4892]: I0122 09:56:54.995149 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" event={"ID":"12c8d866-32b3-4952-bffa-4993dd9dede1","Type":"ContainerDied","Data":"b0fb29b3e5f8f6b4fbe4b9bea96330f39a7d5203123a8fbe4ca15276cdfa86e2"} Jan 22 09:56:56 crc kubenswrapper[4892]: I0122 09:56:56.507750 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" Jan 22 09:56:56 crc kubenswrapper[4892]: I0122 09:56:56.685913 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhlq4\" (UniqueName: \"kubernetes.io/projected/12c8d866-32b3-4952-bffa-4993dd9dede1-kube-api-access-dhlq4\") pod \"12c8d866-32b3-4952-bffa-4993dd9dede1\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " Jan 22 09:56:56 crc kubenswrapper[4892]: I0122 09:56:56.685979 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-cell1-compute-config-0\") pod \"12c8d866-32b3-4952-bffa-4993dd9dede1\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " Jan 22 09:56:56 crc kubenswrapper[4892]: I0122 09:56:56.686050 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-migration-ssh-key-0\") pod \"12c8d866-32b3-4952-bffa-4993dd9dede1\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " Jan 22 09:56:56 crc kubenswrapper[4892]: I0122 09:56:56.686123 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-extra-config-0\") pod \"12c8d866-32b3-4952-bffa-4993dd9dede1\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " Jan 22 09:56:56 crc kubenswrapper[4892]: I0122 09:56:56.686200 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-inventory\") pod \"12c8d866-32b3-4952-bffa-4993dd9dede1\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " Jan 22 09:56:56 crc kubenswrapper[4892]: I0122 09:56:56.686310 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-combined-ca-bundle\") pod \"12c8d866-32b3-4952-bffa-4993dd9dede1\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " Jan 22 09:56:56 crc kubenswrapper[4892]: I0122 09:56:56.686361 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-cell1-compute-config-1\") pod \"12c8d866-32b3-4952-bffa-4993dd9dede1\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " Jan 22 09:56:56 crc kubenswrapper[4892]: I0122 09:56:56.686407 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-ssh-key-openstack-edpm-ipam\") pod \"12c8d866-32b3-4952-bffa-4993dd9dede1\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " Jan 22 09:56:56 crc kubenswrapper[4892]: I0122 09:56:56.686427 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-migration-ssh-key-1\") pod \"12c8d866-32b3-4952-bffa-4993dd9dede1\" (UID: \"12c8d866-32b3-4952-bffa-4993dd9dede1\") " Jan 22 09:56:56 crc kubenswrapper[4892]: I0122 09:56:56.692444 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "12c8d866-32b3-4952-bffa-4993dd9dede1" (UID: "12c8d866-32b3-4952-bffa-4993dd9dede1"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:56:56 crc kubenswrapper[4892]: I0122 09:56:56.692893 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12c8d866-32b3-4952-bffa-4993dd9dede1-kube-api-access-dhlq4" (OuterVolumeSpecName: "kube-api-access-dhlq4") pod "12c8d866-32b3-4952-bffa-4993dd9dede1" (UID: "12c8d866-32b3-4952-bffa-4993dd9dede1"). InnerVolumeSpecName "kube-api-access-dhlq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:56:56 crc kubenswrapper[4892]: I0122 09:56:56.715645 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "12c8d866-32b3-4952-bffa-4993dd9dede1" (UID: "12c8d866-32b3-4952-bffa-4993dd9dede1"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:56:56 crc kubenswrapper[4892]: I0122 09:56:56.717784 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "12c8d866-32b3-4952-bffa-4993dd9dede1" (UID: "12c8d866-32b3-4952-bffa-4993dd9dede1"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:56:56 crc kubenswrapper[4892]: I0122 09:56:56.720138 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "12c8d866-32b3-4952-bffa-4993dd9dede1" (UID: "12c8d866-32b3-4952-bffa-4993dd9dede1"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:56:56 crc kubenswrapper[4892]: I0122 09:56:56.720682 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "12c8d866-32b3-4952-bffa-4993dd9dede1" (UID: "12c8d866-32b3-4952-bffa-4993dd9dede1"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:56:56 crc kubenswrapper[4892]: I0122 09:56:56.721185 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-inventory" (OuterVolumeSpecName: "inventory") pod "12c8d866-32b3-4952-bffa-4993dd9dede1" (UID: "12c8d866-32b3-4952-bffa-4993dd9dede1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:56:56 crc kubenswrapper[4892]: I0122 09:56:56.722160 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "12c8d866-32b3-4952-bffa-4993dd9dede1" (UID: "12c8d866-32b3-4952-bffa-4993dd9dede1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:56:56 crc kubenswrapper[4892]: I0122 09:56:56.723856 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "12c8d866-32b3-4952-bffa-4993dd9dede1" (UID: "12c8d866-32b3-4952-bffa-4993dd9dede1"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:56:56 crc kubenswrapper[4892]: I0122 09:56:56.788010 4892 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 22 09:56:56 crc kubenswrapper[4892]: I0122 09:56:56.788336 4892 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 09:56:56 crc kubenswrapper[4892]: I0122 09:56:56.788345 4892 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:56:56 crc kubenswrapper[4892]: I0122 09:56:56.788356 4892 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 22 09:56:56 crc kubenswrapper[4892]: I0122 09:56:56.788390 4892 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:56:56 crc kubenswrapper[4892]: I0122 09:56:56.788399 4892 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 22 09:56:56 crc kubenswrapper[4892]: I0122 09:56:56.788408 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhlq4\" (UniqueName: \"kubernetes.io/projected/12c8d866-32b3-4952-bffa-4993dd9dede1-kube-api-access-dhlq4\") on node \"crc\" DevicePath \"\"" Jan 22 09:56:56 crc kubenswrapper[4892]: I0122 09:56:56.788417 4892 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 22 09:56:56 crc kubenswrapper[4892]: I0122 09:56:56.788425 4892 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/12c8d866-32b3-4952-bffa-4993dd9dede1-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.013067 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" event={"ID":"12c8d866-32b3-4952-bffa-4993dd9dede1","Type":"ContainerDied","Data":"de7befb728cc78eb85eb717b67422c27d23f587c8d82af50a039fd5befa65480"} Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.013108 4892 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="de7befb728cc78eb85eb717b67422c27d23f587c8d82af50a039fd5befa65480" Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.013140 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vnhgw" Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.211668 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w"] Jan 22 09:56:57 crc kubenswrapper[4892]: E0122 09:56:57.212053 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12c8d866-32b3-4952-bffa-4993dd9dede1" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.212078 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="12c8d866-32b3-4952-bffa-4993dd9dede1" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.212368 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="12c8d866-32b3-4952-bffa-4993dd9dede1" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.213207 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w" Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.215054 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.215231 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lbm5h" Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.215271 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.216510 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.221172 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.226263 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w"] Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.302170 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzcqm\" (UniqueName: \"kubernetes.io/projected/cdb08ec6-d82f-4ea7-b6af-170f51b46949-kube-api-access-fzcqm\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w\" (UID: \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w" Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.302556 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w\" (UID: \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w" Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.302585 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w\" (UID: \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w" Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.302729 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w\" (UID: \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w" Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.302790 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w\" (UID: \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w" Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.302833 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w\" (UID: \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w" Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.303057 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w\" (UID: \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w" Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.405919 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w\" (UID: \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w" Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.405989 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w\" (UID: \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w" Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.406024 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w\" (UID: \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w" Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.406049 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w\" (UID: \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w" Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.406084 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w\" (UID: \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w" Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.406158 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w\" (UID: \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w" Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.406200 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzcqm\" (UniqueName: \"kubernetes.io/projected/cdb08ec6-d82f-4ea7-b6af-170f51b46949-kube-api-access-fzcqm\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w\" (UID: \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w" Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.412125 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w\" (UID: \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w" Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.412216 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w\" (UID: \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w" Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.412691 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w\" (UID: \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w" Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.412956 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-ceilometer-compute-config-data-2\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w\" (UID: \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w" Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.413799 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w\" (UID: \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w" Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.415905 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w\" (UID: \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w" Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.432976 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzcqm\" (UniqueName: \"kubernetes.io/projected/cdb08ec6-d82f-4ea7-b6af-170f51b46949-kube-api-access-fzcqm\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w\" (UID: \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w" Jan 22 09:56:57 crc kubenswrapper[4892]: I0122 09:56:57.587244 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w" Jan 22 09:56:58 crc kubenswrapper[4892]: I0122 09:56:58.145730 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w"] Jan 22 09:56:59 crc kubenswrapper[4892]: I0122 09:56:59.030656 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w" event={"ID":"cdb08ec6-d82f-4ea7-b6af-170f51b46949","Type":"ContainerStarted","Data":"2e117b1a3bc9539f47dc720e9f630a418b4fa97bc5f07bd86c681d84c8a6f3e2"} Jan 22 09:56:59 crc kubenswrapper[4892]: I0122 09:56:59.032520 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w" event={"ID":"cdb08ec6-d82f-4ea7-b6af-170f51b46949","Type":"ContainerStarted","Data":"3dd08d2c3879d04b31fd1dd88ecd94e45c899ecbef2687cae723596b679ad7e6"} Jan 22 09:57:00 crc kubenswrapper[4892]: I0122 09:57:00.062541 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w" podStartSLOduration=2.56901942 podStartE2EDuration="3.062506623s" podCreationTimestamp="2026-01-22 09:56:57 +0000 UTC" firstStartedPulling="2026-01-22 09:56:58.175303188 +0000 UTC m=+2788.019382251" lastFinishedPulling="2026-01-22 09:56:58.668790391 +0000 UTC m=+2788.512869454" observedRunningTime="2026-01-22 09:57:00.059380905 +0000 UTC m=+2789.903459988" watchObservedRunningTime="2026-01-22 09:57:00.062506623 +0000 UTC m=+2789.906585686" Jan 22 09:57:10 crc kubenswrapper[4892]: I0122 09:57:10.198609 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bfq4s"] Jan 22 09:57:10 crc kubenswrapper[4892]: I0122 09:57:10.201218 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bfq4s" Jan 22 09:57:10 crc kubenswrapper[4892]: I0122 09:57:10.209216 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bfq4s"] Jan 22 09:57:10 crc kubenswrapper[4892]: I0122 09:57:10.373806 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4-utilities\") pod \"redhat-operators-bfq4s\" (UID: \"5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4\") " pod="openshift-marketplace/redhat-operators-bfq4s" Jan 22 09:57:10 crc kubenswrapper[4892]: I0122 09:57:10.373866 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4-catalog-content\") pod \"redhat-operators-bfq4s\" (UID: \"5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4\") " pod="openshift-marketplace/redhat-operators-bfq4s" Jan 22 09:57:10 crc kubenswrapper[4892]: I0122 09:57:10.374155 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjxl6\" (UniqueName: \"kubernetes.io/projected/5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4-kube-api-access-fjxl6\") pod \"redhat-operators-bfq4s\" (UID: \"5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4\") " pod="openshift-marketplace/redhat-operators-bfq4s" Jan 22 09:57:10 crc kubenswrapper[4892]: I0122 09:57:10.476309 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4-catalog-content\") pod \"redhat-operators-bfq4s\" (UID: \"5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4\") " pod="openshift-marketplace/redhat-operators-bfq4s" Jan 22 09:57:10 crc kubenswrapper[4892]: I0122 09:57:10.476793 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjxl6\" (UniqueName: \"kubernetes.io/projected/5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4-kube-api-access-fjxl6\") pod \"redhat-operators-bfq4s\" (UID: \"5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4\") " pod="openshift-marketplace/redhat-operators-bfq4s" Jan 22 09:57:10 crc kubenswrapper[4892]: I0122 09:57:10.477379 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4-catalog-content\") pod \"redhat-operators-bfq4s\" (UID: \"5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4\") " pod="openshift-marketplace/redhat-operators-bfq4s" Jan 22 09:57:10 crc kubenswrapper[4892]: I0122 09:57:10.477736 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4-utilities\") pod \"redhat-operators-bfq4s\" (UID: \"5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4\") " pod="openshift-marketplace/redhat-operators-bfq4s" Jan 22 09:57:10 crc kubenswrapper[4892]: I0122 09:57:10.477826 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4-utilities\") pod \"redhat-operators-bfq4s\" (UID: \"5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4\") " pod="openshift-marketplace/redhat-operators-bfq4s" Jan 22 09:57:10 crc kubenswrapper[4892]: I0122 09:57:10.502788 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-fjxl6\" (UniqueName: \"kubernetes.io/projected/5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4-kube-api-access-fjxl6\") pod \"redhat-operators-bfq4s\" (UID: \"5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4\") " pod="openshift-marketplace/redhat-operators-bfq4s" Jan 22 09:57:10 crc kubenswrapper[4892]: I0122 09:57:10.531720 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bfq4s" Jan 22 09:57:11 crc kubenswrapper[4892]: I0122 09:57:11.071918 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bfq4s"] Jan 22 09:57:11 crc kubenswrapper[4892]: I0122 09:57:11.177856 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bfq4s" event={"ID":"5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4","Type":"ContainerStarted","Data":"f3dac88f93373e01962b11570b864ff50f0738b0b7702ce21693032581bbe03b"} Jan 22 09:57:12 crc kubenswrapper[4892]: I0122 09:57:12.187572 4892 generic.go:334] "Generic (PLEG): container finished" podID="5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4" containerID="5113c3941071f439b966081550828fa91b554502d229da30124659e24ab060c5" exitCode=0 Jan 22 09:57:12 crc kubenswrapper[4892]: I0122 09:57:12.187675 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bfq4s" event={"ID":"5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4","Type":"ContainerDied","Data":"5113c3941071f439b966081550828fa91b554502d229da30124659e24ab060c5"} Jan 22 09:57:15 crc kubenswrapper[4892]: I0122 09:57:15.213377 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bfq4s" event={"ID":"5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4","Type":"ContainerStarted","Data":"00dc8ca5b54f6bd3333bd492604fd855320ba644b8ee944abe94455d0d9bba86"} Jan 22 09:57:16 crc kubenswrapper[4892]: I0122 09:57:16.226104 4892 generic.go:334] "Generic (PLEG): container finished" podID="5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4" containerID="00dc8ca5b54f6bd3333bd492604fd855320ba644b8ee944abe94455d0d9bba86" exitCode=0 Jan 22 09:57:16 crc kubenswrapper[4892]: I0122 09:57:16.226217 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bfq4s" event={"ID":"5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4","Type":"ContainerDied","Data":"00dc8ca5b54f6bd3333bd492604fd855320ba644b8ee944abe94455d0d9bba86"} Jan 22 09:57:16 crc kubenswrapper[4892]: I0122 09:57:16.323866 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:57:16 crc kubenswrapper[4892]: I0122 09:57:16.323921 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:57:18 crc kubenswrapper[4892]: I0122 09:57:18.248103 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bfq4s" event={"ID":"5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4","Type":"ContainerStarted","Data":"dfa7f1ac444f1bbc6599f18bb1ec35b67738133e9dca9101bfbdb4d15bbb297b"} Jan 22 09:57:18 crc kubenswrapper[4892]: I0122 09:57:18.275171 4892 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bfq4s" podStartSLOduration=3.535610344 podStartE2EDuration="8.275150595s" podCreationTimestamp="2026-01-22 09:57:10 +0000 UTC" firstStartedPulling="2026-01-22 09:57:12.189348287 +0000 UTC m=+2802.033427350" lastFinishedPulling="2026-01-22 09:57:16.928888528 +0000 UTC m=+2806.772967601" observedRunningTime="2026-01-22 09:57:18.27093804 +0000 UTC m=+2808.115017113" watchObservedRunningTime="2026-01-22 09:57:18.275150595 +0000 UTC m=+2808.119229658" Jan 22 09:57:20 crc kubenswrapper[4892]: I0122 09:57:20.531934 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bfq4s" Jan 22 09:57:20 crc kubenswrapper[4892]: I0122 09:57:20.532460 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bfq4s" Jan 22 09:57:21 crc kubenswrapper[4892]: I0122 09:57:21.586543 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bfq4s" podUID="5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4" containerName="registry-server" probeResult="failure" output=< Jan 22 09:57:21 crc kubenswrapper[4892]: timeout: failed to connect service ":50051" within 1s Jan 22 09:57:21 crc kubenswrapper[4892]: > Jan 22 09:57:30 crc kubenswrapper[4892]: I0122 09:57:30.587231 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bfq4s" Jan 22 09:57:30 crc kubenswrapper[4892]: I0122 09:57:30.638208 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bfq4s" Jan 22 09:57:30 crc kubenswrapper[4892]: I0122 09:57:30.830511 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bfq4s"] Jan 22 09:57:32 crc kubenswrapper[4892]: I0122 09:57:32.379785 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bfq4s" podUID="5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4" containerName="registry-server" containerID="cri-o://dfa7f1ac444f1bbc6599f18bb1ec35b67738133e9dca9101bfbdb4d15bbb297b" gracePeriod=2 Jan 22 09:57:33 crc kubenswrapper[4892]: I0122 09:57:33.393869 4892 generic.go:334] "Generic (PLEG): container finished" podID="5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4" containerID="dfa7f1ac444f1bbc6599f18bb1ec35b67738133e9dca9101bfbdb4d15bbb297b" exitCode=0 Jan 22 09:57:33 crc kubenswrapper[4892]: I0122 09:57:33.393910 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bfq4s" event={"ID":"5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4","Type":"ContainerDied","Data":"dfa7f1ac444f1bbc6599f18bb1ec35b67738133e9dca9101bfbdb4d15bbb297b"} Jan 22 09:57:33 crc kubenswrapper[4892]: I0122 09:57:33.832514 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bfq4s" Jan 22 09:57:33 crc kubenswrapper[4892]: I0122 09:57:33.904120 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjxl6\" (UniqueName: \"kubernetes.io/projected/5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4-kube-api-access-fjxl6\") pod \"5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4\" (UID: \"5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4\") " Jan 22 09:57:33 crc kubenswrapper[4892]: I0122 09:57:33.904314 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4-catalog-content\") pod \"5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4\" (UID: \"5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4\") " Jan 22 09:57:33 crc kubenswrapper[4892]: I0122 09:57:33.904429 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4-utilities\") pod \"5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4\" (UID: \"5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4\") " Jan 22 09:57:33 crc kubenswrapper[4892]: I0122 09:57:33.905350 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4-utilities" (OuterVolumeSpecName: "utilities") pod "5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4" (UID: "5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:57:33 crc kubenswrapper[4892]: I0122 09:57:33.911550 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4-kube-api-access-fjxl6" (OuterVolumeSpecName: "kube-api-access-fjxl6") pod "5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4" (UID: "5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4"). InnerVolumeSpecName "kube-api-access-fjxl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:57:34 crc kubenswrapper[4892]: I0122 09:57:34.006965 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjxl6\" (UniqueName: \"kubernetes.io/projected/5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4-kube-api-access-fjxl6\") on node \"crc\" DevicePath \"\"" Jan 22 09:57:34 crc kubenswrapper[4892]: I0122 09:57:34.007012 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:57:34 crc kubenswrapper[4892]: I0122 09:57:34.040538 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4" (UID: "5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:57:34 crc kubenswrapper[4892]: I0122 09:57:34.111685 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:57:34 crc kubenswrapper[4892]: I0122 09:57:34.406912 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bfq4s" event={"ID":"5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4","Type":"ContainerDied","Data":"f3dac88f93373e01962b11570b864ff50f0738b0b7702ce21693032581bbe03b"} Jan 22 09:57:34 crc kubenswrapper[4892]: I0122 09:57:34.406994 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bfq4s" Jan 22 09:57:34 crc kubenswrapper[4892]: I0122 09:57:34.406993 4892 scope.go:117] "RemoveContainer" containerID="dfa7f1ac444f1bbc6599f18bb1ec35b67738133e9dca9101bfbdb4d15bbb297b" Jan 22 09:57:34 crc kubenswrapper[4892]: I0122 09:57:34.446246 4892 scope.go:117] "RemoveContainer" containerID="00dc8ca5b54f6bd3333bd492604fd855320ba644b8ee944abe94455d0d9bba86" Jan 22 09:57:34 crc kubenswrapper[4892]: I0122 09:57:34.469567 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bfq4s"] Jan 22 09:57:34 crc kubenswrapper[4892]: I0122 09:57:34.481720 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bfq4s"] Jan 22 09:57:34 crc kubenswrapper[4892]: I0122 09:57:34.483562 4892 scope.go:117] "RemoveContainer" containerID="5113c3941071f439b966081550828fa91b554502d229da30124659e24ab060c5" Jan 22 09:57:35 crc kubenswrapper[4892]: I0122 09:57:35.432708 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4" path="/var/lib/kubelet/pods/5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4/volumes" Jan 22 09:57:46 crc kubenswrapper[4892]: I0122 09:57:46.324944 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:57:46 crc kubenswrapper[4892]: I0122 09:57:46.325597 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:57:46 crc kubenswrapper[4892]: I0122 09:57:46.325656 4892 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" Jan 22 09:57:46 crc kubenswrapper[4892]: I0122 09:57:46.326412 4892 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"18b3137a2cc8ece0bcea98bac87e9ea542c23f80d68386bc5e1f9301d6a7852b"} pod="openshift-machine-config-operator/machine-config-daemon-w87tf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 09:57:46 crc kubenswrapper[4892]: I0122 09:57:46.326460 4892 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" containerID="cri-o://18b3137a2cc8ece0bcea98bac87e9ea542c23f80d68386bc5e1f9301d6a7852b" gracePeriod=600 Jan 22 09:57:46 crc kubenswrapper[4892]: I0122 09:57:46.536722 4892 generic.go:334] "Generic (PLEG): container finished" podID="4765e554-3060-4876-90fe-5e054619d7a1" containerID="18b3137a2cc8ece0bcea98bac87e9ea542c23f80d68386bc5e1f9301d6a7852b" exitCode=0 Jan 22 09:57:46 crc kubenswrapper[4892]: I0122 09:57:46.536745 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerDied","Data":"18b3137a2cc8ece0bcea98bac87e9ea542c23f80d68386bc5e1f9301d6a7852b"} Jan 22 09:57:46 crc kubenswrapper[4892]: I0122 09:57:46.537097 4892 scope.go:117] "RemoveContainer" containerID="406d3f788a21c72def457de095745fb76f7479f38bdb157442777ae85d5e5860" Jan 22 09:57:47 crc kubenswrapper[4892]: I0122 09:57:47.551411 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerStarted","Data":"f02b7adbd7ecf9d9310d1d7218fcd8432ecf37dc2ae44231d1b4fbaeaaf19a06"} Jan 22 09:58:08 crc kubenswrapper[4892]: I0122 09:58:08.945002 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-75cn7"] Jan 22 09:58:08 crc kubenswrapper[4892]: E0122 09:58:08.945939 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4" containerName="extract-utilities" Jan 22 09:58:08 crc kubenswrapper[4892]: I0122 09:58:08.945955 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4" containerName="extract-utilities" Jan 22 09:58:08 crc kubenswrapper[4892]: E0122 09:58:08.945977 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4" containerName="extract-content" Jan 22 09:58:08 crc kubenswrapper[4892]: I0122 09:58:08.945983 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4" containerName="extract-content" Jan 22 09:58:08 crc kubenswrapper[4892]: E0122 09:58:08.946008 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4" containerName="registry-server" Jan 22 09:58:08 crc kubenswrapper[4892]: I0122 09:58:08.946013 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4" containerName="registry-server" Jan 22 09:58:08 crc kubenswrapper[4892]: I0122 09:58:08.946168 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b693cc3-fb3b-47fa-a83e-aa35ac6fe9e4" containerName="registry-server" Jan 22 09:58:08 crc kubenswrapper[4892]: I0122 09:58:08.947949 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-75cn7" Jan 22 09:58:08 crc kubenswrapper[4892]: I0122 09:58:08.957000 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-75cn7"] Jan 22 09:58:09 crc kubenswrapper[4892]: I0122 09:58:09.083688 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e4feacd-2fae-4242-81a6-2de47aca5dd7-utilities\") pod \"certified-operators-75cn7\" (UID: \"0e4feacd-2fae-4242-81a6-2de47aca5dd7\") " pod="openshift-marketplace/certified-operators-75cn7" Jan 22 09:58:09 crc kubenswrapper[4892]: I0122 09:58:09.083736 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e4feacd-2fae-4242-81a6-2de47aca5dd7-catalog-content\") pod \"certified-operators-75cn7\" (UID: \"0e4feacd-2fae-4242-81a6-2de47aca5dd7\") " pod="openshift-marketplace/certified-operators-75cn7" Jan 22 09:58:09 crc kubenswrapper[4892]: I0122 09:58:09.083782 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxnq4\" (UniqueName: \"kubernetes.io/projected/0e4feacd-2fae-4242-81a6-2de47aca5dd7-kube-api-access-fxnq4\") pod \"certified-operators-75cn7\" (UID: \"0e4feacd-2fae-4242-81a6-2de47aca5dd7\") " pod="openshift-marketplace/certified-operators-75cn7" Jan 22 09:58:09 crc kubenswrapper[4892]: I0122 09:58:09.185236 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e4feacd-2fae-4242-81a6-2de47aca5dd7-utilities\") pod \"certified-operators-75cn7\" (UID: \"0e4feacd-2fae-4242-81a6-2de47aca5dd7\") " pod="openshift-marketplace/certified-operators-75cn7" Jan 22 09:58:09 crc kubenswrapper[4892]: I0122 09:58:09.185306 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e4feacd-2fae-4242-81a6-2de47aca5dd7-catalog-content\") pod \"certified-operators-75cn7\" (UID: \"0e4feacd-2fae-4242-81a6-2de47aca5dd7\") " pod="openshift-marketplace/certified-operators-75cn7" Jan 22 09:58:09 crc kubenswrapper[4892]: I0122 09:58:09.185361 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxnq4\" (UniqueName: \"kubernetes.io/projected/0e4feacd-2fae-4242-81a6-2de47aca5dd7-kube-api-access-fxnq4\") pod \"certified-operators-75cn7\" (UID: \"0e4feacd-2fae-4242-81a6-2de47aca5dd7\") " pod="openshift-marketplace/certified-operators-75cn7" Jan 22 09:58:09 crc kubenswrapper[4892]: I0122 09:58:09.185904 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e4feacd-2fae-4242-81a6-2de47aca5dd7-utilities\") pod \"certified-operators-75cn7\" (UID: \"0e4feacd-2fae-4242-81a6-2de47aca5dd7\") " pod="openshift-marketplace/certified-operators-75cn7" Jan 22 09:58:09 crc kubenswrapper[4892]: I0122 09:58:09.186014 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e4feacd-2fae-4242-81a6-2de47aca5dd7-catalog-content\") pod \"certified-operators-75cn7\" (UID: \"0e4feacd-2fae-4242-81a6-2de47aca5dd7\") " pod="openshift-marketplace/certified-operators-75cn7" Jan 22 09:58:09 crc kubenswrapper[4892]: I0122 09:58:09.208366 4892 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fxnq4\" (UniqueName: \"kubernetes.io/projected/0e4feacd-2fae-4242-81a6-2de47aca5dd7-kube-api-access-fxnq4\") pod \"certified-operators-75cn7\" (UID: \"0e4feacd-2fae-4242-81a6-2de47aca5dd7\") " pod="openshift-marketplace/certified-operators-75cn7" Jan 22 09:58:09 crc kubenswrapper[4892]: I0122 09:58:09.279480 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-75cn7" Jan 22 09:58:09 crc kubenswrapper[4892]: I0122 09:58:09.826894 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-75cn7"] Jan 22 09:58:09 crc kubenswrapper[4892]: W0122 09:58:09.829874 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e4feacd_2fae_4242_81a6_2de47aca5dd7.slice/crio-d6d09943e2d4ce8a53fdf9a9877c38fbec9f2d0d336b5fa08f110ca71c7e0905 WatchSource:0}: Error finding container d6d09943e2d4ce8a53fdf9a9877c38fbec9f2d0d336b5fa08f110ca71c7e0905: Status 404 returned error can't find the container with id d6d09943e2d4ce8a53fdf9a9877c38fbec9f2d0d336b5fa08f110ca71c7e0905 Jan 22 09:58:10 crc kubenswrapper[4892]: I0122 09:58:10.757204 4892 generic.go:334] "Generic (PLEG): container finished" podID="0e4feacd-2fae-4242-81a6-2de47aca5dd7" containerID="1b40e185527f497fee87a5a32fcfe8340a8bd087dbd2703f9567cad46674fa0c" exitCode=0 Jan 22 09:58:10 crc kubenswrapper[4892]: I0122 09:58:10.757482 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-75cn7" event={"ID":"0e4feacd-2fae-4242-81a6-2de47aca5dd7","Type":"ContainerDied","Data":"1b40e185527f497fee87a5a32fcfe8340a8bd087dbd2703f9567cad46674fa0c"} Jan 22 09:58:10 crc kubenswrapper[4892]: I0122 09:58:10.757512 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-75cn7" event={"ID":"0e4feacd-2fae-4242-81a6-2de47aca5dd7","Type":"ContainerStarted","Data":"d6d09943e2d4ce8a53fdf9a9877c38fbec9f2d0d336b5fa08f110ca71c7e0905"} Jan 22 09:58:21 crc kubenswrapper[4892]: I0122 09:58:21.851780 4892 generic.go:334] "Generic (PLEG): container finished" podID="0e4feacd-2fae-4242-81a6-2de47aca5dd7" containerID="b5c555ae13c6cafe35a733349968033311dbc36e7a15559150dad0f266e97140" exitCode=0 Jan 22 09:58:21 crc kubenswrapper[4892]: I0122 09:58:21.851903 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-75cn7" event={"ID":"0e4feacd-2fae-4242-81a6-2de47aca5dd7","Type":"ContainerDied","Data":"b5c555ae13c6cafe35a733349968033311dbc36e7a15559150dad0f266e97140"} Jan 22 09:58:23 crc kubenswrapper[4892]: I0122 09:58:23.874723 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-75cn7" event={"ID":"0e4feacd-2fae-4242-81a6-2de47aca5dd7","Type":"ContainerStarted","Data":"413076f42163a7fb79c1e52778616d6bd83352fc8cc7bbe1e1d7c6eeef2fd62f"} Jan 22 09:58:23 crc kubenswrapper[4892]: I0122 09:58:23.907957 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-75cn7" podStartSLOduration=3.966078924 podStartE2EDuration="15.907934871s" podCreationTimestamp="2026-01-22 09:58:08 +0000 UTC" firstStartedPulling="2026-01-22 09:58:10.7593033 +0000 UTC m=+2860.603382363" lastFinishedPulling="2026-01-22 09:58:22.701159247 +0000 UTC m=+2872.545238310" observedRunningTime="2026-01-22 09:58:23.901052589 +0000 UTC 
m=+2873.745131652" watchObservedRunningTime="2026-01-22 09:58:23.907934871 +0000 UTC m=+2873.752013934" Jan 22 09:58:29 crc kubenswrapper[4892]: I0122 09:58:29.280386 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-75cn7" Jan 22 09:58:29 crc kubenswrapper[4892]: I0122 09:58:29.280975 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-75cn7" Jan 22 09:58:29 crc kubenswrapper[4892]: I0122 09:58:29.340001 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-75cn7" Jan 22 09:58:30 crc kubenswrapper[4892]: I0122 09:58:30.021963 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-75cn7" Jan 22 09:58:30 crc kubenswrapper[4892]: I0122 09:58:30.118508 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-75cn7"] Jan 22 09:58:30 crc kubenswrapper[4892]: I0122 09:58:30.217142 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p8m4r"] Jan 22 09:58:30 crc kubenswrapper[4892]: I0122 09:58:30.217462 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p8m4r" podUID="5524172a-41d9-4206-b133-ff86aa15f588" containerName="registry-server" containerID="cri-o://402b216e1e42cf81df925eceb7b218ca18832a1d5a1331bdd28d8838df972e4c" gracePeriod=2 Jan 22 09:58:30 crc kubenswrapper[4892]: I0122 09:58:30.792136 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p8m4r" Jan 22 09:58:30 crc kubenswrapper[4892]: I0122 09:58:30.910851 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5524172a-41d9-4206-b133-ff86aa15f588-catalog-content\") pod \"5524172a-41d9-4206-b133-ff86aa15f588\" (UID: \"5524172a-41d9-4206-b133-ff86aa15f588\") " Jan 22 09:58:30 crc kubenswrapper[4892]: I0122 09:58:30.911161 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5524172a-41d9-4206-b133-ff86aa15f588-utilities\") pod \"5524172a-41d9-4206-b133-ff86aa15f588\" (UID: \"5524172a-41d9-4206-b133-ff86aa15f588\") " Jan 22 09:58:30 crc kubenswrapper[4892]: I0122 09:58:30.911265 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htd9h\" (UniqueName: \"kubernetes.io/projected/5524172a-41d9-4206-b133-ff86aa15f588-kube-api-access-htd9h\") pod \"5524172a-41d9-4206-b133-ff86aa15f588\" (UID: \"5524172a-41d9-4206-b133-ff86aa15f588\") " Jan 22 09:58:30 crc kubenswrapper[4892]: I0122 09:58:30.911918 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5524172a-41d9-4206-b133-ff86aa15f588-utilities" (OuterVolumeSpecName: "utilities") pod "5524172a-41d9-4206-b133-ff86aa15f588" (UID: "5524172a-41d9-4206-b133-ff86aa15f588"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:58:30 crc kubenswrapper[4892]: I0122 09:58:30.928617 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5524172a-41d9-4206-b133-ff86aa15f588-kube-api-access-htd9h" (OuterVolumeSpecName: "kube-api-access-htd9h") pod "5524172a-41d9-4206-b133-ff86aa15f588" (UID: "5524172a-41d9-4206-b133-ff86aa15f588"). InnerVolumeSpecName "kube-api-access-htd9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:58:30 crc kubenswrapper[4892]: I0122 09:58:30.971600 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5524172a-41d9-4206-b133-ff86aa15f588-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5524172a-41d9-4206-b133-ff86aa15f588" (UID: "5524172a-41d9-4206-b133-ff86aa15f588"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:58:30 crc kubenswrapper[4892]: I0122 09:58:30.982672 4892 generic.go:334] "Generic (PLEG): container finished" podID="5524172a-41d9-4206-b133-ff86aa15f588" containerID="402b216e1e42cf81df925eceb7b218ca18832a1d5a1331bdd28d8838df972e4c" exitCode=0 Jan 22 09:58:30 crc kubenswrapper[4892]: I0122 09:58:30.983415 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p8m4r" event={"ID":"5524172a-41d9-4206-b133-ff86aa15f588","Type":"ContainerDied","Data":"402b216e1e42cf81df925eceb7b218ca18832a1d5a1331bdd28d8838df972e4c"} Jan 22 09:58:30 crc kubenswrapper[4892]: I0122 09:58:30.983485 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p8m4r" event={"ID":"5524172a-41d9-4206-b133-ff86aa15f588","Type":"ContainerDied","Data":"5553c64301f308bdcf58cb97ec09050a521f7793571ddd4366ad545d6776cf18"} Jan 22 09:58:30 crc kubenswrapper[4892]: I0122 09:58:30.983527 4892 scope.go:117] "RemoveContainer" containerID="402b216e1e42cf81df925eceb7b218ca18832a1d5a1331bdd28d8838df972e4c" Jan 22 09:58:30 crc kubenswrapper[4892]: I0122 09:58:30.983448 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p8m4r" Jan 22 09:58:31 crc kubenswrapper[4892]: I0122 09:58:31.015110 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5524172a-41d9-4206-b133-ff86aa15f588-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:58:31 crc kubenswrapper[4892]: I0122 09:58:31.015476 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htd9h\" (UniqueName: \"kubernetes.io/projected/5524172a-41d9-4206-b133-ff86aa15f588-kube-api-access-htd9h\") on node \"crc\" DevicePath \"\"" Jan 22 09:58:31 crc kubenswrapper[4892]: I0122 09:58:31.015587 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5524172a-41d9-4206-b133-ff86aa15f588-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:58:31 crc kubenswrapper[4892]: I0122 09:58:31.015612 4892 scope.go:117] "RemoveContainer" containerID="af209143576344fbc6773417ff97ce84950cbf131c34866eda8baff74e4e921e" Jan 22 09:58:31 crc kubenswrapper[4892]: I0122 09:58:31.031648 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p8m4r"] Jan 22 09:58:31 crc kubenswrapper[4892]: I0122 09:58:31.049315 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p8m4r"] Jan 22 09:58:31 crc kubenswrapper[4892]: I0122 09:58:31.051963 4892 scope.go:117] "RemoveContainer" containerID="ef9af9cea7e293fc025ec315f7617b248253b5991d07e41da41e89fb046516ba" Jan 22 09:58:31 crc kubenswrapper[4892]: I0122 09:58:31.106387 4892 scope.go:117] "RemoveContainer" containerID="402b216e1e42cf81df925eceb7b218ca18832a1d5a1331bdd28d8838df972e4c" Jan 22 09:58:31 crc kubenswrapper[4892]: E0122 09:58:31.106998 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"402b216e1e42cf81df925eceb7b218ca18832a1d5a1331bdd28d8838df972e4c\": container with ID starting with 402b216e1e42cf81df925eceb7b218ca18832a1d5a1331bdd28d8838df972e4c not found: ID does not exist" containerID="402b216e1e42cf81df925eceb7b218ca18832a1d5a1331bdd28d8838df972e4c" Jan 22 09:58:31 crc kubenswrapper[4892]: I0122 09:58:31.107056 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"402b216e1e42cf81df925eceb7b218ca18832a1d5a1331bdd28d8838df972e4c"} err="failed to get container status \"402b216e1e42cf81df925eceb7b218ca18832a1d5a1331bdd28d8838df972e4c\": rpc error: code = NotFound desc = could not find container \"402b216e1e42cf81df925eceb7b218ca18832a1d5a1331bdd28d8838df972e4c\": container with ID starting with 402b216e1e42cf81df925eceb7b218ca18832a1d5a1331bdd28d8838df972e4c not found: ID does not exist" Jan 22 09:58:31 crc kubenswrapper[4892]: I0122 09:58:31.107089 4892 scope.go:117] "RemoveContainer" containerID="af209143576344fbc6773417ff97ce84950cbf131c34866eda8baff74e4e921e" Jan 22 09:58:31 crc kubenswrapper[4892]: E0122 09:58:31.108100 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af209143576344fbc6773417ff97ce84950cbf131c34866eda8baff74e4e921e\": container with ID starting with af209143576344fbc6773417ff97ce84950cbf131c34866eda8baff74e4e921e not found: ID does not exist" containerID="af209143576344fbc6773417ff97ce84950cbf131c34866eda8baff74e4e921e" Jan 22 09:58:31 crc kubenswrapper[4892]: I0122 09:58:31.108130 4892 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af209143576344fbc6773417ff97ce84950cbf131c34866eda8baff74e4e921e"} err="failed to get container status \"af209143576344fbc6773417ff97ce84950cbf131c34866eda8baff74e4e921e\": rpc error: code = NotFound desc = could not find container \"af209143576344fbc6773417ff97ce84950cbf131c34866eda8baff74e4e921e\": container with ID starting with af209143576344fbc6773417ff97ce84950cbf131c34866eda8baff74e4e921e not found: ID does not exist" Jan 22 09:58:31 crc kubenswrapper[4892]: I0122 09:58:31.108222 4892 scope.go:117] "RemoveContainer" containerID="ef9af9cea7e293fc025ec315f7617b248253b5991d07e41da41e89fb046516ba" Jan 22 09:58:31 crc kubenswrapper[4892]: E0122 09:58:31.108493 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef9af9cea7e293fc025ec315f7617b248253b5991d07e41da41e89fb046516ba\": container with ID starting with ef9af9cea7e293fc025ec315f7617b248253b5991d07e41da41e89fb046516ba not found: ID does not exist" containerID="ef9af9cea7e293fc025ec315f7617b248253b5991d07e41da41e89fb046516ba" Jan 22 09:58:31 crc kubenswrapper[4892]: I0122 09:58:31.108520 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef9af9cea7e293fc025ec315f7617b248253b5991d07e41da41e89fb046516ba"} err="failed to get container status \"ef9af9cea7e293fc025ec315f7617b248253b5991d07e41da41e89fb046516ba\": rpc error: code = NotFound desc = could not find container \"ef9af9cea7e293fc025ec315f7617b248253b5991d07e41da41e89fb046516ba\": container with ID starting with ef9af9cea7e293fc025ec315f7617b248253b5991d07e41da41e89fb046516ba not found: ID does not exist" Jan 22 09:58:31 crc kubenswrapper[4892]: I0122 09:58:31.436439 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5524172a-41d9-4206-b133-ff86aa15f588" path="/var/lib/kubelet/pods/5524172a-41d9-4206-b133-ff86aa15f588/volumes" Jan 22 09:58:47 crc kubenswrapper[4892]: I0122 09:58:47.009169 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zmlf7"] Jan 22 09:58:47 crc kubenswrapper[4892]: E0122 09:58:47.010177 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5524172a-41d9-4206-b133-ff86aa15f588" containerName="registry-server" Jan 22 09:58:47 crc kubenswrapper[4892]: I0122 09:58:47.010197 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="5524172a-41d9-4206-b133-ff86aa15f588" containerName="registry-server" Jan 22 09:58:47 crc kubenswrapper[4892]: E0122 09:58:47.010217 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5524172a-41d9-4206-b133-ff86aa15f588" containerName="extract-utilities" Jan 22 09:58:47 crc kubenswrapper[4892]: I0122 09:58:47.010225 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="5524172a-41d9-4206-b133-ff86aa15f588" containerName="extract-utilities" Jan 22 09:58:47 crc kubenswrapper[4892]: E0122 09:58:47.010265 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5524172a-41d9-4206-b133-ff86aa15f588" containerName="extract-content" Jan 22 09:58:47 crc kubenswrapper[4892]: I0122 09:58:47.010274 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="5524172a-41d9-4206-b133-ff86aa15f588" containerName="extract-content" Jan 22 09:58:47 crc kubenswrapper[4892]: I0122 09:58:47.010507 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="5524172a-41d9-4206-b133-ff86aa15f588" 
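The RemoveContainer/ContainerStatus exchange above is an idempotent cleanup: the kubelet asks the runtime for the container's status before deleting it, CRI-O answers with gRPC NotFound because the container is already gone, and the kubelet logs the error and moves on rather than failing the sync. A minimal sketch of that "NotFound means already deleted" pattern, assuming a CRI-style gRPC client; the `runtimeService` interface and `removeIfPresent` helper are illustrative, not kubelet code:

```go
package cricleanup

import (
	"context"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// runtimeService is a hypothetical stand-in for a CRI runtime client.
type runtimeService interface {
	ContainerStatus(ctx context.Context, id string) error
	RemoveContainer(ctx context.Context, id string) error
}

// removeIfPresent deletes a container, treating gRPC NotFound as success:
// the records above show the kubelet logging exactly this error and then
// continuing, because "already gone" is the desired end state.
func removeIfPresent(ctx context.Context, rt runtimeService, id string) error {
	if err := rt.ContainerStatus(ctx, id); err != nil {
		if status.Code(err) == codes.NotFound {
			return nil // idempotent: someone already removed it
		}
		return fmt.Errorf("failed to get container status %q: %w", id, err)
	}
	return rt.RemoveContainer(ctx, id)
}
```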
Jan 22 09:58:47 crc kubenswrapper[4892]: I0122 09:58:47.010507 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="5524172a-41d9-4206-b133-ff86aa15f588" containerName="registry-server"
Jan 22 09:58:47 crc kubenswrapper[4892]: I0122 09:58:47.012128 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zmlf7"
Jan 22 09:58:47 crc kubenswrapper[4892]: I0122 09:58:47.033192 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zmlf7"]
Jan 22 09:58:47 crc kubenswrapper[4892]: I0122 09:58:47.140105 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7814aef5-0804-4f43-aa5b-8907ac4a8996-utilities\") pod \"redhat-marketplace-zmlf7\" (UID: \"7814aef5-0804-4f43-aa5b-8907ac4a8996\") " pod="openshift-marketplace/redhat-marketplace-zmlf7"
Jan 22 09:58:47 crc kubenswrapper[4892]: I0122 09:58:47.140187 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl85w\" (UniqueName: \"kubernetes.io/projected/7814aef5-0804-4f43-aa5b-8907ac4a8996-kube-api-access-pl85w\") pod \"redhat-marketplace-zmlf7\" (UID: \"7814aef5-0804-4f43-aa5b-8907ac4a8996\") " pod="openshift-marketplace/redhat-marketplace-zmlf7"
Jan 22 09:58:47 crc kubenswrapper[4892]: I0122 09:58:47.140383 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7814aef5-0804-4f43-aa5b-8907ac4a8996-catalog-content\") pod \"redhat-marketplace-zmlf7\" (UID: \"7814aef5-0804-4f43-aa5b-8907ac4a8996\") " pod="openshift-marketplace/redhat-marketplace-zmlf7"
Jan 22 09:58:47 crc kubenswrapper[4892]: I0122 09:58:47.241985 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7814aef5-0804-4f43-aa5b-8907ac4a8996-catalog-content\") pod \"redhat-marketplace-zmlf7\" (UID: \"7814aef5-0804-4f43-aa5b-8907ac4a8996\") " pod="openshift-marketplace/redhat-marketplace-zmlf7"
Jan 22 09:58:47 crc kubenswrapper[4892]: I0122 09:58:47.242182 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7814aef5-0804-4f43-aa5b-8907ac4a8996-utilities\") pod \"redhat-marketplace-zmlf7\" (UID: \"7814aef5-0804-4f43-aa5b-8907ac4a8996\") " pod="openshift-marketplace/redhat-marketplace-zmlf7"
Jan 22 09:58:47 crc kubenswrapper[4892]: I0122 09:58:47.242254 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl85w\" (UniqueName: \"kubernetes.io/projected/7814aef5-0804-4f43-aa5b-8907ac4a8996-kube-api-access-pl85w\") pod \"redhat-marketplace-zmlf7\" (UID: \"7814aef5-0804-4f43-aa5b-8907ac4a8996\") " pod="openshift-marketplace/redhat-marketplace-zmlf7"
Jan 22 09:58:47 crc kubenswrapper[4892]: I0122 09:58:47.242580 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7814aef5-0804-4f43-aa5b-8907ac4a8996-catalog-content\") pod \"redhat-marketplace-zmlf7\" (UID: \"7814aef5-0804-4f43-aa5b-8907ac4a8996\") " pod="openshift-marketplace/redhat-marketplace-zmlf7"
Jan 22 09:58:47 crc kubenswrapper[4892]: I0122 09:58:47.242658 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7814aef5-0804-4f43-aa5b-8907ac4a8996-utilities\") pod \"redhat-marketplace-zmlf7\" (UID: \"7814aef5-0804-4f43-aa5b-8907ac4a8996\") " pod="openshift-marketplace/redhat-marketplace-zmlf7"
Jan 22 09:58:47 crc kubenswrapper[4892]: I0122 09:58:47.261729 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl85w\" (UniqueName: \"kubernetes.io/projected/7814aef5-0804-4f43-aa5b-8907ac4a8996-kube-api-access-pl85w\") pod \"redhat-marketplace-zmlf7\" (UID: \"7814aef5-0804-4f43-aa5b-8907ac4a8996\") " pod="openshift-marketplace/redhat-marketplace-zmlf7"
Jan 22 09:58:47 crc kubenswrapper[4892]: I0122 09:58:47.337802 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zmlf7"
Jan 22 09:58:47 crc kubenswrapper[4892]: I0122 09:58:47.857579 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zmlf7"]
Jan 22 09:58:48 crc kubenswrapper[4892]: I0122 09:58:48.125438 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmlf7" event={"ID":"7814aef5-0804-4f43-aa5b-8907ac4a8996","Type":"ContainerStarted","Data":"993eb0d6c85ccd1b73686572fd226e17832987f27fcc8ccdee732e393e66dc84"}
Jan 22 09:58:51 crc kubenswrapper[4892]: I0122 09:58:51.151715 4892 generic.go:334] "Generic (PLEG): container finished" podID="7814aef5-0804-4f43-aa5b-8907ac4a8996" containerID="4297fc1afd9dda131a87cedb12c4b27b067682ebe3a713f88decefc7a6b6d318" exitCode=0
Jan 22 09:58:51 crc kubenswrapper[4892]: I0122 09:58:51.151762 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmlf7" event={"ID":"7814aef5-0804-4f43-aa5b-8907ac4a8996","Type":"ContainerDied","Data":"4297fc1afd9dda131a87cedb12c4b27b067682ebe3a713f88decefc7a6b6d318"}
Jan 22 09:59:01 crc kubenswrapper[4892]: I0122 09:59:01.245528 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmlf7" event={"ID":"7814aef5-0804-4f43-aa5b-8907ac4a8996","Type":"ContainerStarted","Data":"e8a51b06293f7a02faaa9fa1daf7d80c85ed00a2fc748170e6c6485bedd0d5f5"}
Jan 22 09:59:02 crc kubenswrapper[4892]: I0122 09:59:02.258624 4892 generic.go:334] "Generic (PLEG): container finished" podID="7814aef5-0804-4f43-aa5b-8907ac4a8996" containerID="e8a51b06293f7a02faaa9fa1daf7d80c85ed00a2fc748170e6c6485bedd0d5f5" exitCode=0
Jan 22 09:59:02 crc kubenswrapper[4892]: I0122 09:59:02.258687 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmlf7" event={"ID":"7814aef5-0804-4f43-aa5b-8907ac4a8996","Type":"ContainerDied","Data":"e8a51b06293f7a02faaa9fa1daf7d80c85ed00a2fc748170e6c6485bedd0d5f5"}
Jan 22 09:59:04 crc kubenswrapper[4892]: I0122 09:59:04.280997 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmlf7" event={"ID":"7814aef5-0804-4f43-aa5b-8907ac4a8996","Type":"ContainerStarted","Data":"ab54c3815ad66b5f3b1bcb400e20d76a64e8c4abdd35a014791bca135598e451"}
Jan 22 09:59:05 crc kubenswrapper[4892]: I0122 09:59:05.322695 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zmlf7" podStartSLOduration=6.830495627 podStartE2EDuration="19.322672377s" podCreationTimestamp="2026-01-22 09:58:46 +0000 UTC" firstStartedPulling="2026-01-22 09:58:51.154140107 +0000 UTC m=+2900.998219170" lastFinishedPulling="2026-01-22 09:59:03.646316857 +0000 UTC m=+2913.490395920" observedRunningTime="2026-01-22 09:59:05.312195676 +0000 UTC m=+2915.156274739" watchObservedRunningTime="2026-01-22 09:59:05.322672377 +0000 UTC m=+2915.166751440"
Jan 22 09:59:07 crc kubenswrapper[4892]: I0122 09:59:07.338567 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zmlf7"
Jan 22 09:59:07 crc kubenswrapper[4892]: I0122 09:59:07.338928 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zmlf7"
Jan 22 09:59:07 crc kubenswrapper[4892]: I0122 09:59:07.383230 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zmlf7"
Jan 22 09:59:17 crc kubenswrapper[4892]: I0122 09:59:17.393621 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zmlf7"
Jan 22 09:59:17 crc kubenswrapper[4892]: I0122 09:59:17.476009 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zmlf7"]
Jan 22 09:59:17 crc kubenswrapper[4892]: I0122 09:59:17.476333 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zmlf7" podUID="7814aef5-0804-4f43-aa5b-8907ac4a8996" containerName="registry-server" containerID="cri-o://ab54c3815ad66b5f3b1bcb400e20d76a64e8c4abdd35a014791bca135598e451" gracePeriod=2
Jan 22 09:59:17 crc kubenswrapper[4892]: I0122 09:59:17.982189 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zmlf7"
Jan 22 09:59:18 crc kubenswrapper[4892]: I0122 09:59:18.065673 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pl85w\" (UniqueName: \"kubernetes.io/projected/7814aef5-0804-4f43-aa5b-8907ac4a8996-kube-api-access-pl85w\") pod \"7814aef5-0804-4f43-aa5b-8907ac4a8996\" (UID: \"7814aef5-0804-4f43-aa5b-8907ac4a8996\") "
Jan 22 09:59:18 crc kubenswrapper[4892]: I0122 09:59:18.065963 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7814aef5-0804-4f43-aa5b-8907ac4a8996-utilities\") pod \"7814aef5-0804-4f43-aa5b-8907ac4a8996\" (UID: \"7814aef5-0804-4f43-aa5b-8907ac4a8996\") "
Jan 22 09:59:18 crc kubenswrapper[4892]: I0122 09:59:18.066018 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7814aef5-0804-4f43-aa5b-8907ac4a8996-catalog-content\") pod \"7814aef5-0804-4f43-aa5b-8907ac4a8996\" (UID: \"7814aef5-0804-4f43-aa5b-8907ac4a8996\") "
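The "Observed pod startup duration" record above carries its own arithmetic: podStartE2EDuration runs from creation (09:58:46) to observed running (09:59:05.322672377), and podStartSLOduration is that span minus the image-pull window (firstStartedPulling to lastFinishedPulling). A quick consistency check under that reading, with the timestamps copied from the record (the creation timestamp's fraction is padded to nine digits for parsing):

```go
package sloduration

import "time"

// Check reproduces the numbers in the "Observed pod startup duration"
// record above: SLO duration = end-to-end startup time minus the
// image-pull window.
func Check() (e2e, pull, slo time.Duration) {
	const layout = "2006-01-02 15:04:05.000000000 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-01-22 09:58:46.000000000 +0000 UTC")   // podCreationTimestamp
	running := parse("2026-01-22 09:59:05.322672377 +0000 UTC")   // watchObservedRunningTime
	pullStart := parse("2026-01-22 09:58:51.154140107 +0000 UTC") // firstStartedPulling
	pullEnd := parse("2026-01-22 09:59:03.646316857 +0000 UTC")   // lastFinishedPulling

	e2e = running.Sub(created)    // 19.322672377s = podStartE2EDuration
	pull = pullEnd.Sub(pullStart) // 12.492176750s spent pulling images
	slo = e2e - pull              // 6.830495627s = podStartSLOduration
	return e2e, pull, slo
}
```

The numbers close exactly: 19.322672377s − 12.492176750s = 6.830495627s, the logged podStartSLOduration.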
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:59:18 crc kubenswrapper[4892]: I0122 09:59:18.068855 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7814aef5-0804-4f43-aa5b-8907ac4a8996-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:59:18 crc kubenswrapper[4892]: I0122 09:59:18.072316 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7814aef5-0804-4f43-aa5b-8907ac4a8996-kube-api-access-pl85w" (OuterVolumeSpecName: "kube-api-access-pl85w") pod "7814aef5-0804-4f43-aa5b-8907ac4a8996" (UID: "7814aef5-0804-4f43-aa5b-8907ac4a8996"). InnerVolumeSpecName "kube-api-access-pl85w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:59:18 crc kubenswrapper[4892]: I0122 09:59:18.091874 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7814aef5-0804-4f43-aa5b-8907ac4a8996-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7814aef5-0804-4f43-aa5b-8907ac4a8996" (UID: "7814aef5-0804-4f43-aa5b-8907ac4a8996"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:59:18 crc kubenswrapper[4892]: I0122 09:59:18.172391 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pl85w\" (UniqueName: \"kubernetes.io/projected/7814aef5-0804-4f43-aa5b-8907ac4a8996-kube-api-access-pl85w\") on node \"crc\" DevicePath \"\"" Jan 22 09:59:18 crc kubenswrapper[4892]: I0122 09:59:18.172862 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7814aef5-0804-4f43-aa5b-8907ac4a8996-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:59:18 crc kubenswrapper[4892]: I0122 09:59:18.429595 4892 generic.go:334] "Generic (PLEG): container finished" podID="7814aef5-0804-4f43-aa5b-8907ac4a8996" containerID="ab54c3815ad66b5f3b1bcb400e20d76a64e8c4abdd35a014791bca135598e451" exitCode=0 Jan 22 09:59:18 crc kubenswrapper[4892]: I0122 09:59:18.429679 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmlf7" event={"ID":"7814aef5-0804-4f43-aa5b-8907ac4a8996","Type":"ContainerDied","Data":"ab54c3815ad66b5f3b1bcb400e20d76a64e8c4abdd35a014791bca135598e451"} Jan 22 09:59:18 crc kubenswrapper[4892]: I0122 09:59:18.429768 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmlf7" event={"ID":"7814aef5-0804-4f43-aa5b-8907ac4a8996","Type":"ContainerDied","Data":"993eb0d6c85ccd1b73686572fd226e17832987f27fcc8ccdee732e393e66dc84"} Jan 22 09:59:18 crc kubenswrapper[4892]: I0122 09:59:18.429801 4892 scope.go:117] "RemoveContainer" containerID="ab54c3815ad66b5f3b1bcb400e20d76a64e8c4abdd35a014791bca135598e451" Jan 22 09:59:18 crc kubenswrapper[4892]: I0122 09:59:18.429810 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zmlf7" Jan 22 09:59:18 crc kubenswrapper[4892]: I0122 09:59:18.461574 4892 scope.go:117] "RemoveContainer" containerID="e8a51b06293f7a02faaa9fa1daf7d80c85ed00a2fc748170e6c6485bedd0d5f5" Jan 22 09:59:18 crc kubenswrapper[4892]: I0122 09:59:18.473849 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zmlf7"] Jan 22 09:59:18 crc kubenswrapper[4892]: I0122 09:59:18.489072 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zmlf7"] Jan 22 09:59:18 crc kubenswrapper[4892]: I0122 09:59:18.504944 4892 scope.go:117] "RemoveContainer" containerID="4297fc1afd9dda131a87cedb12c4b27b067682ebe3a713f88decefc7a6b6d318" Jan 22 09:59:18 crc kubenswrapper[4892]: I0122 09:59:18.534855 4892 scope.go:117] "RemoveContainer" containerID="ab54c3815ad66b5f3b1bcb400e20d76a64e8c4abdd35a014791bca135598e451" Jan 22 09:59:18 crc kubenswrapper[4892]: E0122 09:59:18.535417 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab54c3815ad66b5f3b1bcb400e20d76a64e8c4abdd35a014791bca135598e451\": container with ID starting with ab54c3815ad66b5f3b1bcb400e20d76a64e8c4abdd35a014791bca135598e451 not found: ID does not exist" containerID="ab54c3815ad66b5f3b1bcb400e20d76a64e8c4abdd35a014791bca135598e451" Jan 22 09:59:18 crc kubenswrapper[4892]: I0122 09:59:18.535479 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab54c3815ad66b5f3b1bcb400e20d76a64e8c4abdd35a014791bca135598e451"} err="failed to get container status \"ab54c3815ad66b5f3b1bcb400e20d76a64e8c4abdd35a014791bca135598e451\": rpc error: code = NotFound desc = could not find container \"ab54c3815ad66b5f3b1bcb400e20d76a64e8c4abdd35a014791bca135598e451\": container with ID starting with ab54c3815ad66b5f3b1bcb400e20d76a64e8c4abdd35a014791bca135598e451 not found: ID does not exist" Jan 22 09:59:18 crc kubenswrapper[4892]: I0122 09:59:18.535510 4892 scope.go:117] "RemoveContainer" containerID="e8a51b06293f7a02faaa9fa1daf7d80c85ed00a2fc748170e6c6485bedd0d5f5" Jan 22 09:59:18 crc kubenswrapper[4892]: E0122 09:59:18.536208 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8a51b06293f7a02faaa9fa1daf7d80c85ed00a2fc748170e6c6485bedd0d5f5\": container with ID starting with e8a51b06293f7a02faaa9fa1daf7d80c85ed00a2fc748170e6c6485bedd0d5f5 not found: ID does not exist" containerID="e8a51b06293f7a02faaa9fa1daf7d80c85ed00a2fc748170e6c6485bedd0d5f5" Jan 22 09:59:18 crc kubenswrapper[4892]: I0122 09:59:18.536248 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8a51b06293f7a02faaa9fa1daf7d80c85ed00a2fc748170e6c6485bedd0d5f5"} err="failed to get container status \"e8a51b06293f7a02faaa9fa1daf7d80c85ed00a2fc748170e6c6485bedd0d5f5\": rpc error: code = NotFound desc = could not find container \"e8a51b06293f7a02faaa9fa1daf7d80c85ed00a2fc748170e6c6485bedd0d5f5\": container with ID starting with e8a51b06293f7a02faaa9fa1daf7d80c85ed00a2fc748170e6c6485bedd0d5f5 not found: ID does not exist" Jan 22 09:59:18 crc kubenswrapper[4892]: I0122 09:59:18.536274 4892 scope.go:117] "RemoveContainer" containerID="4297fc1afd9dda131a87cedb12c4b27b067682ebe3a713f88decefc7a6b6d318" Jan 22 09:59:18 crc kubenswrapper[4892]: E0122 09:59:18.536851 4892 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4297fc1afd9dda131a87cedb12c4b27b067682ebe3a713f88decefc7a6b6d318\": container with ID starting with 4297fc1afd9dda131a87cedb12c4b27b067682ebe3a713f88decefc7a6b6d318 not found: ID does not exist" containerID="4297fc1afd9dda131a87cedb12c4b27b067682ebe3a713f88decefc7a6b6d318" Jan 22 09:59:18 crc kubenswrapper[4892]: I0122 09:59:18.536883 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4297fc1afd9dda131a87cedb12c4b27b067682ebe3a713f88decefc7a6b6d318"} err="failed to get container status \"4297fc1afd9dda131a87cedb12c4b27b067682ebe3a713f88decefc7a6b6d318\": rpc error: code = NotFound desc = could not find container \"4297fc1afd9dda131a87cedb12c4b27b067682ebe3a713f88decefc7a6b6d318\": container with ID starting with 4297fc1afd9dda131a87cedb12c4b27b067682ebe3a713f88decefc7a6b6d318 not found: ID does not exist" Jan 22 09:59:19 crc kubenswrapper[4892]: I0122 09:59:19.431169 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7814aef5-0804-4f43-aa5b-8907ac4a8996" path="/var/lib/kubelet/pods/7814aef5-0804-4f43-aa5b-8907ac4a8996/volumes" Jan 22 09:59:46 crc kubenswrapper[4892]: I0122 09:59:46.323230 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:59:46 crc kubenswrapper[4892]: I0122 09:59:46.323906 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:59:57 crc kubenswrapper[4892]: I0122 09:59:57.831000 4892 generic.go:334] "Generic (PLEG): container finished" podID="cdb08ec6-d82f-4ea7-b6af-170f51b46949" containerID="2e117b1a3bc9539f47dc720e9f630a418b4fa97bc5f07bd86c681d84c8a6f3e2" exitCode=0 Jan 22 09:59:57 crc kubenswrapper[4892]: I0122 09:59:57.831105 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w" event={"ID":"cdb08ec6-d82f-4ea7-b6af-170f51b46949","Type":"ContainerDied","Data":"2e117b1a3bc9539f47dc720e9f630a418b4fa97bc5f07bd86c681d84c8a6f3e2"} Jan 22 09:59:59 crc kubenswrapper[4892]: I0122 09:59:59.326851 4892 util.go:48] "No ready sandbox for pod can be found. 
Jan 22 09:59:59 crc kubenswrapper[4892]: I0122 09:59:59.326851 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w"
Jan 22 09:59:59 crc kubenswrapper[4892]: I0122 09:59:59.434901 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-ceilometer-compute-config-data-1\") pod \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\" (UID: \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\") "
Jan 22 09:59:59 crc kubenswrapper[4892]: I0122 09:59:59.435440 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-ssh-key-openstack-edpm-ipam\") pod \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\" (UID: \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\") "
Jan 22 09:59:59 crc kubenswrapper[4892]: I0122 09:59:59.435571 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-ceilometer-compute-config-data-2\") pod \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\" (UID: \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\") "
Jan 22 09:59:59 crc kubenswrapper[4892]: I0122 09:59:59.435695 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-ceilometer-compute-config-data-0\") pod \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\" (UID: \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\") "
Jan 22 09:59:59 crc kubenswrapper[4892]: I0122 09:59:59.435797 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzcqm\" (UniqueName: \"kubernetes.io/projected/cdb08ec6-d82f-4ea7-b6af-170f51b46949-kube-api-access-fzcqm\") pod \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\" (UID: \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\") "
Jan 22 09:59:59 crc kubenswrapper[4892]: I0122 09:59:59.435909 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-telemetry-combined-ca-bundle\") pod \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\" (UID: \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\") "
Jan 22 09:59:59 crc kubenswrapper[4892]: I0122 09:59:59.436007 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-inventory\") pod \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\" (UID: \"cdb08ec6-d82f-4ea7-b6af-170f51b46949\") "
Jan 22 09:59:59 crc kubenswrapper[4892]: I0122 09:59:59.443172 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "cdb08ec6-d82f-4ea7-b6af-170f51b46949" (UID: "cdb08ec6-d82f-4ea7-b6af-170f51b46949"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:59:59 crc kubenswrapper[4892]: I0122 09:59:59.443198 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdb08ec6-d82f-4ea7-b6af-170f51b46949-kube-api-access-fzcqm" (OuterVolumeSpecName: "kube-api-access-fzcqm") pod "cdb08ec6-d82f-4ea7-b6af-170f51b46949" (UID: "cdb08ec6-d82f-4ea7-b6af-170f51b46949"). InnerVolumeSpecName "kube-api-access-fzcqm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:59:59 crc kubenswrapper[4892]: I0122 09:59:59.464059 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-inventory" (OuterVolumeSpecName: "inventory") pod "cdb08ec6-d82f-4ea7-b6af-170f51b46949" (UID: "cdb08ec6-d82f-4ea7-b6af-170f51b46949"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:59:59 crc kubenswrapper[4892]: I0122 09:59:59.468957 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cdb08ec6-d82f-4ea7-b6af-170f51b46949" (UID: "cdb08ec6-d82f-4ea7-b6af-170f51b46949"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:59:59 crc kubenswrapper[4892]: I0122 09:59:59.468977 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "cdb08ec6-d82f-4ea7-b6af-170f51b46949" (UID: "cdb08ec6-d82f-4ea7-b6af-170f51b46949"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:59:59 crc kubenswrapper[4892]: I0122 09:59:59.478515 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "cdb08ec6-d82f-4ea7-b6af-170f51b46949" (UID: "cdb08ec6-d82f-4ea7-b6af-170f51b46949"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:59:59 crc kubenswrapper[4892]: I0122 09:59:59.478555 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "cdb08ec6-d82f-4ea7-b6af-170f51b46949" (UID: "cdb08ec6-d82f-4ea7-b6af-170f51b46949"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:59:59 crc kubenswrapper[4892]: I0122 09:59:59.539257 4892 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 22 09:59:59 crc kubenswrapper[4892]: I0122 09:59:59.539580 4892 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\""
Jan 22 09:59:59 crc kubenswrapper[4892]: I0122 09:59:59.539606 4892 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\""
Jan 22 09:59:59 crc kubenswrapper[4892]: I0122 09:59:59.539622 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzcqm\" (UniqueName: \"kubernetes.io/projected/cdb08ec6-d82f-4ea7-b6af-170f51b46949-kube-api-access-fzcqm\") on node \"crc\" DevicePath \"\""
Jan 22 09:59:59 crc kubenswrapper[4892]: I0122 09:59:59.539635 4892 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 09:59:59 crc kubenswrapper[4892]: I0122 09:59:59.539646 4892 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-inventory\") on node \"crc\" DevicePath \"\""
Jan 22 09:59:59 crc kubenswrapper[4892]: I0122 09:59:59.539744 4892 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/cdb08ec6-d82f-4ea7-b6af-170f51b46949-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\""
Jan 22 09:59:59 crc kubenswrapper[4892]: I0122 09:59:59.847620 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w" event={"ID":"cdb08ec6-d82f-4ea7-b6af-170f51b46949","Type":"ContainerDied","Data":"3dd08d2c3879d04b31fd1dd88ecd94e45c899ecbef2687cae723596b679ad7e6"}
Jan 22 09:59:59 crc kubenswrapper[4892]: I0122 09:59:59.847664 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3dd08d2c3879d04b31fd1dd88ecd94e45c899ecbef2687cae723596b679ad7e6"
Jan 22 09:59:59 crc kubenswrapper[4892]: I0122 09:59:59.847704 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w"
Jan 22 10:00:00 crc kubenswrapper[4892]: I0122 10:00:00.145180 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484600-x2glt"]
Jan 22 10:00:00 crc kubenswrapper[4892]: E0122 10:00:00.145944 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdb08ec6-d82f-4ea7-b6af-170f51b46949" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Jan 22 10:00:00 crc kubenswrapper[4892]: I0122 10:00:00.145966 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdb08ec6-d82f-4ea7-b6af-170f51b46949" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Jan 22 10:00:00 crc kubenswrapper[4892]: E0122 10:00:00.145991 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7814aef5-0804-4f43-aa5b-8907ac4a8996" containerName="registry-server"
Jan 22 10:00:00 crc kubenswrapper[4892]: I0122 10:00:00.146000 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="7814aef5-0804-4f43-aa5b-8907ac4a8996" containerName="registry-server"
Jan 22 10:00:00 crc kubenswrapper[4892]: E0122 10:00:00.146016 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7814aef5-0804-4f43-aa5b-8907ac4a8996" containerName="extract-utilities"
Jan 22 10:00:00 crc kubenswrapper[4892]: I0122 10:00:00.146024 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="7814aef5-0804-4f43-aa5b-8907ac4a8996" containerName="extract-utilities"
Jan 22 10:00:00 crc kubenswrapper[4892]: E0122 10:00:00.146042 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7814aef5-0804-4f43-aa5b-8907ac4a8996" containerName="extract-content"
Jan 22 10:00:00 crc kubenswrapper[4892]: I0122 10:00:00.146048 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="7814aef5-0804-4f43-aa5b-8907ac4a8996" containerName="extract-content"
Jan 22 10:00:00 crc kubenswrapper[4892]: I0122 10:00:00.146221 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdb08ec6-d82f-4ea7-b6af-170f51b46949" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Jan 22 10:00:00 crc kubenswrapper[4892]: I0122 10:00:00.146256 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="7814aef5-0804-4f43-aa5b-8907ac4a8996" containerName="registry-server"
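The burst of cpu_manager/state_mem/memory_manager records above shows what happens on each pod admission: the resource managers sweep their checkpointed per-container assignments and delete entries belonging to pods that no longer exist (here the just-removed telemetry and marketplace pods). A map-pruning sketch of that sweep; the types and function are illustrative, not the kubelet's:

```go
package stalestate

// containerKey identifies checkpointed resource state for one container.
type containerKey struct {
	podUID        string
	containerName string
}

// removeStaleState drops assignments for containers that are no longer
// part of any active pod, mirroring the paired "RemoveStaleState:
// removing container" / "Deleted CPUSet assignment" records above.
func removeStaleState(assignments map[containerKey]string, activePods map[string]bool) {
	for k := range assignments {
		if !activePods[k.podUID] {
			delete(assignments, k) // e.g. podUID=7814aef5-... containerName=registry-server
		}
	}
}
```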
Jan 22 10:00:00 crc kubenswrapper[4892]: I0122 10:00:00.146901 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-x2glt"
Jan 22 10:00:00 crc kubenswrapper[4892]: I0122 10:00:00.149688 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 22 10:00:00 crc kubenswrapper[4892]: I0122 10:00:00.150789 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 22 10:00:00 crc kubenswrapper[4892]: I0122 10:00:00.197607 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484600-x2glt"]
Jan 22 10:00:00 crc kubenswrapper[4892]: I0122 10:00:00.251674 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0bfa7568-0d72-47a7-a75d-1899735502f3-secret-volume\") pod \"collect-profiles-29484600-x2glt\" (UID: \"0bfa7568-0d72-47a7-a75d-1899735502f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-x2glt"
Jan 22 10:00:00 crc kubenswrapper[4892]: I0122 10:00:00.251790 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dpz9\" (UniqueName: \"kubernetes.io/projected/0bfa7568-0d72-47a7-a75d-1899735502f3-kube-api-access-7dpz9\") pod \"collect-profiles-29484600-x2glt\" (UID: \"0bfa7568-0d72-47a7-a75d-1899735502f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-x2glt"
Jan 22 10:00:00 crc kubenswrapper[4892]: I0122 10:00:00.252024 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bfa7568-0d72-47a7-a75d-1899735502f3-config-volume\") pod \"collect-profiles-29484600-x2glt\" (UID: \"0bfa7568-0d72-47a7-a75d-1899735502f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-x2glt"
Jan 22 10:00:00 crc kubenswrapper[4892]: I0122 10:00:00.354632 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bfa7568-0d72-47a7-a75d-1899735502f3-config-volume\") pod \"collect-profiles-29484600-x2glt\" (UID: \"0bfa7568-0d72-47a7-a75d-1899735502f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-x2glt"
Jan 22 10:00:00 crc kubenswrapper[4892]: I0122 10:00:00.354806 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0bfa7568-0d72-47a7-a75d-1899735502f3-secret-volume\") pod \"collect-profiles-29484600-x2glt\" (UID: \"0bfa7568-0d72-47a7-a75d-1899735502f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-x2glt"
Jan 22 10:00:00 crc kubenswrapper[4892]: I0122 10:00:00.354868 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dpz9\" (UniqueName: \"kubernetes.io/projected/0bfa7568-0d72-47a7-a75d-1899735502f3-kube-api-access-7dpz9\") pod \"collect-profiles-29484600-x2glt\" (UID: \"0bfa7568-0d72-47a7-a75d-1899735502f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-x2glt"
Jan 22 10:00:00 crc kubenswrapper[4892]: I0122 10:00:00.355597 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bfa7568-0d72-47a7-a75d-1899735502f3-config-volume\") pod \"collect-profiles-29484600-x2glt\" (UID: \"0bfa7568-0d72-47a7-a75d-1899735502f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-x2glt"
Jan 22 10:00:00 crc kubenswrapper[4892]: I0122 10:00:00.359210 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0bfa7568-0d72-47a7-a75d-1899735502f3-secret-volume\") pod \"collect-profiles-29484600-x2glt\" (UID: \"0bfa7568-0d72-47a7-a75d-1899735502f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-x2glt"
Jan 22 10:00:00 crc kubenswrapper[4892]: I0122 10:00:00.370723 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dpz9\" (UniqueName: \"kubernetes.io/projected/0bfa7568-0d72-47a7-a75d-1899735502f3-kube-api-access-7dpz9\") pod \"collect-profiles-29484600-x2glt\" (UID: \"0bfa7568-0d72-47a7-a75d-1899735502f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-x2glt"
Jan 22 10:00:00 crc kubenswrapper[4892]: I0122 10:00:00.505702 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-x2glt"
Jan 22 10:00:00 crc kubenswrapper[4892]: I0122 10:00:00.924680 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484600-x2glt"]
Jan 22 10:00:01 crc kubenswrapper[4892]: I0122 10:00:01.867169 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-x2glt" event={"ID":"0bfa7568-0d72-47a7-a75d-1899735502f3","Type":"ContainerStarted","Data":"e2401303a95cdde11630d996ace9dc13699f2859b318f85d25652a6ffd893a28"}
Jan 22 10:00:02 crc kubenswrapper[4892]: I0122 10:00:02.879876 4892 generic.go:334] "Generic (PLEG): container finished" podID="0bfa7568-0d72-47a7-a75d-1899735502f3" containerID="7031e8af2b94114b9cd833adbc0e0642b8682532b4974e5630535b06d7c34ce7" exitCode=0
Jan 22 10:00:02 crc kubenswrapper[4892]: I0122 10:00:02.879954 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-x2glt" event={"ID":"0bfa7568-0d72-47a7-a75d-1899735502f3","Type":"ContainerDied","Data":"7031e8af2b94114b9cd833adbc0e0642b8682532b4974e5630535b06d7c34ce7"}
Jan 22 10:00:04 crc kubenswrapper[4892]: I0122 10:00:04.236129 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-x2glt"
Jan 22 10:00:04 crc kubenswrapper[4892]: I0122 10:00:04.343496 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0bfa7568-0d72-47a7-a75d-1899735502f3-secret-volume\") pod \"0bfa7568-0d72-47a7-a75d-1899735502f3\" (UID: \"0bfa7568-0d72-47a7-a75d-1899735502f3\") "
Jan 22 10:00:04 crc kubenswrapper[4892]: I0122 10:00:04.343781 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dpz9\" (UniqueName: \"kubernetes.io/projected/0bfa7568-0d72-47a7-a75d-1899735502f3-kube-api-access-7dpz9\") pod \"0bfa7568-0d72-47a7-a75d-1899735502f3\" (UID: \"0bfa7568-0d72-47a7-a75d-1899735502f3\") "
Jan 22 10:00:04 crc kubenswrapper[4892]: I0122 10:00:04.343843 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bfa7568-0d72-47a7-a75d-1899735502f3-config-volume\") pod \"0bfa7568-0d72-47a7-a75d-1899735502f3\" (UID: \"0bfa7568-0d72-47a7-a75d-1899735502f3\") "
Jan 22 10:00:04 crc kubenswrapper[4892]: I0122 10:00:04.344528 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bfa7568-0d72-47a7-a75d-1899735502f3-config-volume" (OuterVolumeSpecName: "config-volume") pod "0bfa7568-0d72-47a7-a75d-1899735502f3" (UID: "0bfa7568-0d72-47a7-a75d-1899735502f3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:00:04 crc kubenswrapper[4892]: I0122 10:00:04.349571 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bfa7568-0d72-47a7-a75d-1899735502f3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0bfa7568-0d72-47a7-a75d-1899735502f3" (UID: "0bfa7568-0d72-47a7-a75d-1899735502f3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:00:04 crc kubenswrapper[4892]: I0122 10:00:04.350189 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bfa7568-0d72-47a7-a75d-1899735502f3-kube-api-access-7dpz9" (OuterVolumeSpecName: "kube-api-access-7dpz9") pod "0bfa7568-0d72-47a7-a75d-1899735502f3" (UID: "0bfa7568-0d72-47a7-a75d-1899735502f3"). InnerVolumeSpecName "kube-api-access-7dpz9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:00:04 crc kubenswrapper[4892]: I0122 10:00:04.446105 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dpz9\" (UniqueName: \"kubernetes.io/projected/0bfa7568-0d72-47a7-a75d-1899735502f3-kube-api-access-7dpz9\") on node \"crc\" DevicePath \"\""
Jan 22 10:00:04 crc kubenswrapper[4892]: I0122 10:00:04.446148 4892 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bfa7568-0d72-47a7-a75d-1899735502f3-config-volume\") on node \"crc\" DevicePath \"\""
Jan 22 10:00:04 crc kubenswrapper[4892]: I0122 10:00:04.446165 4892 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0bfa7568-0d72-47a7-a75d-1899735502f3-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 22 10:00:04 crc kubenswrapper[4892]: I0122 10:00:04.901707 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-x2glt" event={"ID":"0bfa7568-0d72-47a7-a75d-1899735502f3","Type":"ContainerDied","Data":"e2401303a95cdde11630d996ace9dc13699f2859b318f85d25652a6ffd893a28"}
Jan 22 10:00:04 crc kubenswrapper[4892]: I0122 10:00:04.902074 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2401303a95cdde11630d996ace9dc13699f2859b318f85d25652a6ffd893a28"
Jan 22 10:00:04 crc kubenswrapper[4892]: I0122 10:00:04.901824 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-x2glt"
Jan 22 10:00:05 crc kubenswrapper[4892]: I0122 10:00:05.329376 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484555-4s6c7"]
Jan 22 10:00:05 crc kubenswrapper[4892]: I0122 10:00:05.336320 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484555-4s6c7"]
Jan 22 10:00:05 crc kubenswrapper[4892]: I0122 10:00:05.428563 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7608641-d2e1-4f1f-9fce-cbc081c61ce9" path="/var/lib/kubelet/pods/f7608641-d2e1-4f1f-9fce-cbc081c61ce9/volumes"
Jan 22 10:00:16 crc kubenswrapper[4892]: I0122 10:00:16.326113 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 10:00:16 crc kubenswrapper[4892]: I0122 10:00:16.326786 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 10:00:39 crc kubenswrapper[4892]: I0122 10:00:39.232913 4892 scope.go:117] "RemoveContainer" containerID="7f85f23900519e580bb95911a03bb89a8179e3d58a544b7fb16a16b6dad241dc"
Jan 22 10:00:46 crc kubenswrapper[4892]: I0122 10:00:46.323724 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 10:00:46 crc kubenswrapper[4892]: I0122 10:00:46.324788 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 10:00:46 crc kubenswrapper[4892]: I0122 10:00:46.324874 4892 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w87tf"
Jan 22 10:00:46 crc kubenswrapper[4892]: I0122 10:00:46.326210 4892 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f02b7adbd7ecf9d9310d1d7218fcd8432ecf37dc2ae44231d1b4fbaeaaf19a06"} pod="openshift-machine-config-operator/machine-config-daemon-w87tf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 10:00:46 crc kubenswrapper[4892]: I0122 10:00:46.326307 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" containerID="cri-o://f02b7adbd7ecf9d9310d1d7218fcd8432ecf37dc2ae44231d1b4fbaeaaf19a06" gracePeriod=600
Jan 22 10:00:46 crc kubenswrapper[4892]: E0122 10:00:46.457834 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1"
Jan 22 10:00:47 crc kubenswrapper[4892]: I0122 10:00:47.311006 4892 generic.go:334] "Generic (PLEG): container finished" podID="4765e554-3060-4876-90fe-5e054619d7a1" containerID="f02b7adbd7ecf9d9310d1d7218fcd8432ecf37dc2ae44231d1b4fbaeaaf19a06" exitCode=0
Jan 22 10:00:47 crc kubenswrapper[4892]: I0122 10:00:47.311119 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerDied","Data":"f02b7adbd7ecf9d9310d1d7218fcd8432ecf37dc2ae44231d1b4fbaeaaf19a06"}
Jan 22 10:00:47 crc kubenswrapper[4892]: I0122 10:00:47.311613 4892 scope.go:117] "RemoveContainer" containerID="18b3137a2cc8ece0bcea98bac87e9ea542c23f80d68386bc5e1f9301d6a7852b"
Jan 22 10:00:47 crc kubenswrapper[4892]: I0122 10:00:47.312495 4892 scope.go:117] "RemoveContainer" containerID="f02b7adbd7ecf9d9310d1d7218fcd8432ecf37dc2ae44231d1b4fbaeaaf19a06"
Jan 22 10:00:47 crc kubenswrapper[4892]: E0122 10:00:47.313224 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1"
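The pod_workers records above show the restart backoff already pinned at its ceiling: every sync attempt is refused with "back-off 5m0s restarting failed container". Kubelet-style container backoff is commonly described as an exponential delay that doubles per restart up to a five-minute cap; the 5m0s cap appears verbatim in these messages, while the 10s initial delay below is an assumption for illustration, not read from this log. A sketch of that policy:

```go
package backoff

import "time"

// delayAfter returns the restart delay after n consecutive failures under
// a doubling policy with a ceiling: 10s, 20s, 40s, ... capped at 5m0s.
// The cap matches the CrashLoopBackOff messages above; the initial delay
// is an assumed value.
func delayAfter(n int) time.Duration {
	const (
		initial  = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	d := initial
	for i := 1; i < n; i++ {
		d *= 2
		if d > maxDelay {
			return maxDelay
		}
	}
	return d
}
```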
containerID="f02b7adbd7ecf9d9310d1d7218fcd8432ecf37dc2ae44231d1b4fbaeaaf19a06" Jan 22 10:00:58 crc kubenswrapper[4892]: E0122 10:00:58.419775 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.145759 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29484601-vb56j"] Jan 22 10:01:00 crc kubenswrapper[4892]: E0122 10:01:00.147777 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bfa7568-0d72-47a7-a75d-1899735502f3" containerName="collect-profiles" Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.147870 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bfa7568-0d72-47a7-a75d-1899735502f3" containerName="collect-profiles" Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.148154 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bfa7568-0d72-47a7-a75d-1899735502f3" containerName="collect-profiles" Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.148911 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29484601-vb56j" Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.159031 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29484601-vb56j"] Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.284217 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/675953a2-7c44-4857-a9f6-47dcb2049507-fernet-keys\") pod \"keystone-cron-29484601-vb56j\" (UID: \"675953a2-7c44-4857-a9f6-47dcb2049507\") " pod="openstack/keystone-cron-29484601-vb56j" Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.284669 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/675953a2-7c44-4857-a9f6-47dcb2049507-combined-ca-bundle\") pod \"keystone-cron-29484601-vb56j\" (UID: \"675953a2-7c44-4857-a9f6-47dcb2049507\") " pod="openstack/keystone-cron-29484601-vb56j" Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.284777 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/675953a2-7c44-4857-a9f6-47dcb2049507-config-data\") pod \"keystone-cron-29484601-vb56j\" (UID: \"675953a2-7c44-4857-a9f6-47dcb2049507\") " pod="openstack/keystone-cron-29484601-vb56j" Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.284953 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7gnw\" (UniqueName: \"kubernetes.io/projected/675953a2-7c44-4857-a9f6-47dcb2049507-kube-api-access-k7gnw\") pod \"keystone-cron-29484601-vb56j\" (UID: \"675953a2-7c44-4857-a9f6-47dcb2049507\") " pod="openstack/keystone-cron-29484601-vb56j" Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.386957 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/675953a2-7c44-4857-a9f6-47dcb2049507-combined-ca-bundle\") pod \"keystone-cron-29484601-vb56j\" (UID: \"675953a2-7c44-4857-a9f6-47dcb2049507\") " pod="openstack/keystone-cron-29484601-vb56j" Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.387041 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/675953a2-7c44-4857-a9f6-47dcb2049507-config-data\") pod \"keystone-cron-29484601-vb56j\" (UID: \"675953a2-7c44-4857-a9f6-47dcb2049507\") " pod="openstack/keystone-cron-29484601-vb56j" Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.387089 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7gnw\" (UniqueName: \"kubernetes.io/projected/675953a2-7c44-4857-a9f6-47dcb2049507-kube-api-access-k7gnw\") pod \"keystone-cron-29484601-vb56j\" (UID: \"675953a2-7c44-4857-a9f6-47dcb2049507\") " pod="openstack/keystone-cron-29484601-vb56j" Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.387121 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/675953a2-7c44-4857-a9f6-47dcb2049507-fernet-keys\") pod \"keystone-cron-29484601-vb56j\" (UID: \"675953a2-7c44-4857-a9f6-47dcb2049507\") " pod="openstack/keystone-cron-29484601-vb56j" Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.394383 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/675953a2-7c44-4857-a9f6-47dcb2049507-combined-ca-bundle\") pod \"keystone-cron-29484601-vb56j\" (UID: \"675953a2-7c44-4857-a9f6-47dcb2049507\") " pod="openstack/keystone-cron-29484601-vb56j" Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.395856 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/675953a2-7c44-4857-a9f6-47dcb2049507-config-data\") pod \"keystone-cron-29484601-vb56j\" (UID: \"675953a2-7c44-4857-a9f6-47dcb2049507\") " pod="openstack/keystone-cron-29484601-vb56j" Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.396233 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/675953a2-7c44-4857-a9f6-47dcb2049507-fernet-keys\") pod \"keystone-cron-29484601-vb56j\" (UID: \"675953a2-7c44-4857-a9f6-47dcb2049507\") " pod="openstack/keystone-cron-29484601-vb56j" Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.406240 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7gnw\" (UniqueName: \"kubernetes.io/projected/675953a2-7c44-4857-a9f6-47dcb2049507-kube-api-access-k7gnw\") pod \"keystone-cron-29484601-vb56j\" (UID: \"675953a2-7c44-4857-a9f6-47dcb2049507\") " pod="openstack/keystone-cron-29484601-vb56j" Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.467728 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29484601-vb56j" Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.577762 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.579250 4892 util.go:30] "No sandbox for pod can be found. 
Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.579250 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.584053 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0"
Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.584267 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key"
Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.584460 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-7c4lt"
Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.584614 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.591405 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.693040 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97jrb\" (UniqueName: \"kubernetes.io/projected/13171535-bfb7-4114-884d-b9b031615de3-kube-api-access-97jrb\") pod \"tempest-tests-tempest\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " pod="openstack/tempest-tests-tempest"
Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.693119 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/13171535-bfb7-4114-884d-b9b031615de3-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " pod="openstack/tempest-tests-tempest"
Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.693160 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/13171535-bfb7-4114-884d-b9b031615de3-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " pod="openstack/tempest-tests-tempest"
Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.693181 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13171535-bfb7-4114-884d-b9b031615de3-config-data\") pod \"tempest-tests-tempest\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " pod="openstack/tempest-tests-tempest"
Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.693213 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/13171535-bfb7-4114-884d-b9b031615de3-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " pod="openstack/tempest-tests-tempest"
Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.693276 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " pod="openstack/tempest-tests-tempest"
Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.693335 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/13171535-bfb7-4114-884d-b9b031615de3-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " pod="openstack/tempest-tests-tempest"
Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.694639 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/13171535-bfb7-4114-884d-b9b031615de3-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " pod="openstack/tempest-tests-tempest"
Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.696522 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/13171535-bfb7-4114-884d-b9b031615de3-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " pod="openstack/tempest-tests-tempest"
Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.799884 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/13171535-bfb7-4114-884d-b9b031615de3-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " pod="openstack/tempest-tests-tempest"
Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.799964 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13171535-bfb7-4114-884d-b9b031615de3-config-data\") pod \"tempest-tests-tempest\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " pod="openstack/tempest-tests-tempest"
Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.800008 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/13171535-bfb7-4114-884d-b9b031615de3-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " pod="openstack/tempest-tests-tempest"
Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.800068 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " pod="openstack/tempest-tests-tempest"
Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.800139 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/13171535-bfb7-4114-884d-b9b031615de3-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " pod="openstack/tempest-tests-tempest"
Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.800220 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/13171535-bfb7-4114-884d-b9b031615de3-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " pod="openstack/tempest-tests-tempest"
Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.800328 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/13171535-bfb7-4114-884d-b9b031615de3-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " pod="openstack/tempest-tests-tempest"
Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.800541 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97jrb\" (UniqueName: \"kubernetes.io/projected/13171535-bfb7-4114-884d-b9b031615de3-kube-api-access-97jrb\") pod \"tempest-tests-tempest\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " pod="openstack/tempest-tests-tempest"
Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.800620 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/13171535-bfb7-4114-884d-b9b031615de3-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " pod="openstack/tempest-tests-tempest"
Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.800805 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/13171535-bfb7-4114-884d-b9b031615de3-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " pod="openstack/tempest-tests-tempest"
Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.801042 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/13171535-bfb7-4114-884d-b9b031615de3-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " pod="openstack/tempest-tests-tempest"
Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.801190 4892 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/tempest-tests-tempest"
Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.801381 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/13171535-bfb7-4114-884d-b9b031615de3-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " pod="openstack/tempest-tests-tempest"
Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.801401 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13171535-bfb7-4114-884d-b9b031615de3-config-data\") pod \"tempest-tests-tempest\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " pod="openstack/tempest-tests-tempest"
Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.806297 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/13171535-bfb7-4114-884d-b9b031615de3-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " pod="openstack/tempest-tests-tempest"
Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.811963 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/13171535-bfb7-4114-884d-b9b031615de3-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " pod="openstack/tempest-tests-tempest"
Jan 22 10:01:00 crc kubenswrapper[4892]:
I0122 10:01:00.812270 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/13171535-bfb7-4114-884d-b9b031615de3-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " pod="openstack/tempest-tests-tempest" Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.817552 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97jrb\" (UniqueName: \"kubernetes.io/projected/13171535-bfb7-4114-884d-b9b031615de3-kube-api-access-97jrb\") pod \"tempest-tests-tempest\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " pod="openstack/tempest-tests-tempest" Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.830932 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " pod="openstack/tempest-tests-tempest" Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.918470 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 22 10:01:00 crc kubenswrapper[4892]: I0122 10:01:00.919189 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29484601-vb56j"] Jan 22 10:01:01 crc kubenswrapper[4892]: I0122 10:01:01.359264 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 22 10:01:01 crc kubenswrapper[4892]: W0122 10:01:01.360752 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13171535_bfb7_4114_884d_b9b031615de3.slice/crio-782fd4812f414a1192bc43eb8731865d9633a31acedb60c8c21fff416cec2d0e WatchSource:0}: Error finding container 782fd4812f414a1192bc43eb8731865d9633a31acedb60c8c21fff416cec2d0e: Status 404 returned error can't find the container with id 782fd4812f414a1192bc43eb8731865d9633a31acedb60c8c21fff416cec2d0e Jan 22 10:01:01 crc kubenswrapper[4892]: I0122 10:01:01.362768 4892 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 10:01:01 crc kubenswrapper[4892]: I0122 10:01:01.477655 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"13171535-bfb7-4114-884d-b9b031615de3","Type":"ContainerStarted","Data":"782fd4812f414a1192bc43eb8731865d9633a31acedb60c8c21fff416cec2d0e"} Jan 22 10:01:01 crc kubenswrapper[4892]: I0122 10:01:01.479436 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29484601-vb56j" event={"ID":"675953a2-7c44-4857-a9f6-47dcb2049507","Type":"ContainerStarted","Data":"46ce1e27e7d96055c1a9900c6f500e9f999d8c691e2b9296ed2a57fd1a0740e3"} Jan 22 10:01:01 crc kubenswrapper[4892]: I0122 10:01:01.479469 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29484601-vb56j" event={"ID":"675953a2-7c44-4857-a9f6-47dcb2049507","Type":"ContainerStarted","Data":"5bc73d026513f1daa56d4e46f66b042f87e2b4eff346c369500070fa5766a184"} Jan 22 10:01:01 crc kubenswrapper[4892]: I0122 10:01:01.497087 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29484601-vb56j" podStartSLOduration=1.497069453 podStartE2EDuration="1.497069453s" podCreationTimestamp="2026-01-22 10:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:01:01.495763101 +0000 UTC m=+3031.339842184" watchObservedRunningTime="2026-01-22 10:01:01.497069453 +0000 UTC m=+3031.341148516" Jan 22 10:01:03 crc kubenswrapper[4892]: I0122 10:01:03.497763 4892 generic.go:334] "Generic (PLEG): container finished" podID="675953a2-7c44-4857-a9f6-47dcb2049507" containerID="46ce1e27e7d96055c1a9900c6f500e9f999d8c691e2b9296ed2a57fd1a0740e3" exitCode=0 Jan 22 10:01:03 crc kubenswrapper[4892]: I0122 10:01:03.497855 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29484601-vb56j" event={"ID":"675953a2-7c44-4857-a9f6-47dcb2049507","Type":"ContainerDied","Data":"46ce1e27e7d96055c1a9900c6f500e9f999d8c691e2b9296ed2a57fd1a0740e3"} Jan 22 10:01:06 crc kubenswrapper[4892]: I0122 10:01:06.692901 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29484601-vb56j" Jan 22 10:01:06 crc kubenswrapper[4892]: I0122 10:01:06.829933 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7gnw\" (UniqueName: \"kubernetes.io/projected/675953a2-7c44-4857-a9f6-47dcb2049507-kube-api-access-k7gnw\") pod \"675953a2-7c44-4857-a9f6-47dcb2049507\" (UID: \"675953a2-7c44-4857-a9f6-47dcb2049507\") " Jan 22 10:01:06 crc kubenswrapper[4892]: I0122 10:01:06.830025 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/675953a2-7c44-4857-a9f6-47dcb2049507-fernet-keys\") pod \"675953a2-7c44-4857-a9f6-47dcb2049507\" (UID: \"675953a2-7c44-4857-a9f6-47dcb2049507\") " Jan 22 10:01:06 crc kubenswrapper[4892]: I0122 10:01:06.830252 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/675953a2-7c44-4857-a9f6-47dcb2049507-combined-ca-bundle\") pod \"675953a2-7c44-4857-a9f6-47dcb2049507\" (UID: \"675953a2-7c44-4857-a9f6-47dcb2049507\") " Jan 22 10:01:06 crc kubenswrapper[4892]: I0122 10:01:06.830390 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/675953a2-7c44-4857-a9f6-47dcb2049507-config-data\") pod \"675953a2-7c44-4857-a9f6-47dcb2049507\" (UID: \"675953a2-7c44-4857-a9f6-47dcb2049507\") " Jan 22 10:01:06 crc kubenswrapper[4892]: I0122 10:01:06.837542 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/675953a2-7c44-4857-a9f6-47dcb2049507-kube-api-access-k7gnw" (OuterVolumeSpecName: "kube-api-access-k7gnw") pod "675953a2-7c44-4857-a9f6-47dcb2049507" (UID: "675953a2-7c44-4857-a9f6-47dcb2049507"). InnerVolumeSpecName "kube-api-access-k7gnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:01:06 crc kubenswrapper[4892]: I0122 10:01:06.857559 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/675953a2-7c44-4857-a9f6-47dcb2049507-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "675953a2-7c44-4857-a9f6-47dcb2049507" (UID: "675953a2-7c44-4857-a9f6-47dcb2049507"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:01:06 crc kubenswrapper[4892]: I0122 10:01:06.859968 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/675953a2-7c44-4857-a9f6-47dcb2049507-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "675953a2-7c44-4857-a9f6-47dcb2049507" (UID: "675953a2-7c44-4857-a9f6-47dcb2049507"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:01:06 crc kubenswrapper[4892]: I0122 10:01:06.885902 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/675953a2-7c44-4857-a9f6-47dcb2049507-config-data" (OuterVolumeSpecName: "config-data") pod "675953a2-7c44-4857-a9f6-47dcb2049507" (UID: "675953a2-7c44-4857-a9f6-47dcb2049507"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:01:06 crc kubenswrapper[4892]: I0122 10:01:06.932747 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7gnw\" (UniqueName: \"kubernetes.io/projected/675953a2-7c44-4857-a9f6-47dcb2049507-kube-api-access-k7gnw\") on node \"crc\" DevicePath \"\"" Jan 22 10:01:06 crc kubenswrapper[4892]: I0122 10:01:06.932783 4892 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/675953a2-7c44-4857-a9f6-47dcb2049507-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 22 10:01:06 crc kubenswrapper[4892]: I0122 10:01:06.932796 4892 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/675953a2-7c44-4857-a9f6-47dcb2049507-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:01:06 crc kubenswrapper[4892]: I0122 10:01:06.932805 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/675953a2-7c44-4857-a9f6-47dcb2049507-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:01:07 crc kubenswrapper[4892]: I0122 10:01:07.546139 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29484601-vb56j" event={"ID":"675953a2-7c44-4857-a9f6-47dcb2049507","Type":"ContainerDied","Data":"5bc73d026513f1daa56d4e46f66b042f87e2b4eff346c369500070fa5766a184"} Jan 22 10:01:07 crc kubenswrapper[4892]: I0122 10:01:07.546179 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bc73d026513f1daa56d4e46f66b042f87e2b4eff346c369500070fa5766a184" Jan 22 10:01:07 crc kubenswrapper[4892]: I0122 10:01:07.546235 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29484601-vb56j" Jan 22 10:01:10 crc kubenswrapper[4892]: I0122 10:01:10.419175 4892 scope.go:117] "RemoveContainer" containerID="f02b7adbd7ecf9d9310d1d7218fcd8432ecf37dc2ae44231d1b4fbaeaaf19a06" Jan 22 10:01:10 crc kubenswrapper[4892]: E0122 10:01:10.419976 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:01:22 crc kubenswrapper[4892]: I0122 10:01:22.419332 4892 scope.go:117] "RemoveContainer" containerID="f02b7adbd7ecf9d9310d1d7218fcd8432ecf37dc2ae44231d1b4fbaeaaf19a06" Jan 22 10:01:22 crc kubenswrapper[4892]: E0122 10:01:22.420192 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:01:36 crc kubenswrapper[4892]: I0122 10:01:36.754832 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9fbhc"] Jan 22 10:01:36 crc kubenswrapper[4892]: E0122 10:01:36.755869 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="675953a2-7c44-4857-a9f6-47dcb2049507" containerName="keystone-cron" Jan 22 10:01:36 crc kubenswrapper[4892]: I0122 10:01:36.755886 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="675953a2-7c44-4857-a9f6-47dcb2049507" containerName="keystone-cron" Jan 22 10:01:36 crc kubenswrapper[4892]: I0122 10:01:36.756107 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="675953a2-7c44-4857-a9f6-47dcb2049507" containerName="keystone-cron" Jan 22 10:01:36 crc kubenswrapper[4892]: I0122 10:01:36.758185 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9fbhc" Jan 22 10:01:36 crc kubenswrapper[4892]: I0122 10:01:36.764891 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9fbhc"] Jan 22 10:01:36 crc kubenswrapper[4892]: I0122 10:01:36.879493 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45kh9\" (UniqueName: \"kubernetes.io/projected/58eb61a0-f599-4cdb-a15d-1cf02976a747-kube-api-access-45kh9\") pod \"community-operators-9fbhc\" (UID: \"58eb61a0-f599-4cdb-a15d-1cf02976a747\") " pod="openshift-marketplace/community-operators-9fbhc" Jan 22 10:01:36 crc kubenswrapper[4892]: I0122 10:01:36.879704 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58eb61a0-f599-4cdb-a15d-1cf02976a747-catalog-content\") pod \"community-operators-9fbhc\" (UID: \"58eb61a0-f599-4cdb-a15d-1cf02976a747\") " pod="openshift-marketplace/community-operators-9fbhc" Jan 22 10:01:36 crc kubenswrapper[4892]: I0122 10:01:36.879911 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58eb61a0-f599-4cdb-a15d-1cf02976a747-utilities\") pod \"community-operators-9fbhc\" (UID: \"58eb61a0-f599-4cdb-a15d-1cf02976a747\") " pod="openshift-marketplace/community-operators-9fbhc" Jan 22 10:01:36 crc kubenswrapper[4892]: I0122 10:01:36.981747 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58eb61a0-f599-4cdb-a15d-1cf02976a747-utilities\") pod \"community-operators-9fbhc\" (UID: \"58eb61a0-f599-4cdb-a15d-1cf02976a747\") " pod="openshift-marketplace/community-operators-9fbhc" Jan 22 10:01:36 crc kubenswrapper[4892]: I0122 10:01:36.981841 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45kh9\" (UniqueName: \"kubernetes.io/projected/58eb61a0-f599-4cdb-a15d-1cf02976a747-kube-api-access-45kh9\") pod \"community-operators-9fbhc\" (UID: \"58eb61a0-f599-4cdb-a15d-1cf02976a747\") " pod="openshift-marketplace/community-operators-9fbhc" Jan 22 10:01:36 crc kubenswrapper[4892]: I0122 10:01:36.981901 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58eb61a0-f599-4cdb-a15d-1cf02976a747-catalog-content\") pod \"community-operators-9fbhc\" (UID: \"58eb61a0-f599-4cdb-a15d-1cf02976a747\") " pod="openshift-marketplace/community-operators-9fbhc" Jan 22 10:01:36 crc kubenswrapper[4892]: I0122 10:01:36.982389 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58eb61a0-f599-4cdb-a15d-1cf02976a747-utilities\") pod \"community-operators-9fbhc\" (UID: \"58eb61a0-f599-4cdb-a15d-1cf02976a747\") " pod="openshift-marketplace/community-operators-9fbhc" Jan 22 10:01:36 crc kubenswrapper[4892]: I0122 10:01:36.982418 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58eb61a0-f599-4cdb-a15d-1cf02976a747-catalog-content\") pod \"community-operators-9fbhc\" (UID: \"58eb61a0-f599-4cdb-a15d-1cf02976a747\") " pod="openshift-marketplace/community-operators-9fbhc" Jan 22 10:01:37 crc kubenswrapper[4892]: I0122 10:01:37.003126 4892 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-45kh9\" (UniqueName: \"kubernetes.io/projected/58eb61a0-f599-4cdb-a15d-1cf02976a747-kube-api-access-45kh9\") pod \"community-operators-9fbhc\" (UID: \"58eb61a0-f599-4cdb-a15d-1cf02976a747\") " pod="openshift-marketplace/community-operators-9fbhc" Jan 22 10:01:37 crc kubenswrapper[4892]: I0122 10:01:37.095097 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9fbhc" Jan 22 10:01:37 crc kubenswrapper[4892]: I0122 10:01:37.419743 4892 scope.go:117] "RemoveContainer" containerID="f02b7adbd7ecf9d9310d1d7218fcd8432ecf37dc2ae44231d1b4fbaeaaf19a06" Jan 22 10:01:37 crc kubenswrapper[4892]: E0122 10:01:37.420096 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:01:44 crc kubenswrapper[4892]: E0122 10:01:44.711575 4892 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 22 10:01:44 crc kubenswrapper[4892]: E0122 10:01:44.712613 4892 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-97jrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,
ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(13171535-bfb7-4114-884d-b9b031615de3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 10:01:44 crc kubenswrapper[4892]: E0122 10:01:44.715185 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="13171535-bfb7-4114-884d-b9b031615de3" Jan 22 10:01:44 crc kubenswrapper[4892]: E0122 10:01:44.893711 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="13171535-bfb7-4114-884d-b9b031615de3" Jan 22 10:01:45 crc kubenswrapper[4892]: I0122 10:01:45.110418 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9fbhc"] Jan 22 10:01:45 crc kubenswrapper[4892]: I0122 10:01:45.902261 4892 generic.go:334] "Generic (PLEG): container finished" podID="58eb61a0-f599-4cdb-a15d-1cf02976a747" containerID="6116d6f15aa72f0cd734ead89f9be7a56698864bf1cb45c39d9f1def9393bc82" exitCode=0 Jan 22 10:01:45 crc kubenswrapper[4892]: I0122 10:01:45.902425 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9fbhc" event={"ID":"58eb61a0-f599-4cdb-a15d-1cf02976a747","Type":"ContainerDied","Data":"6116d6f15aa72f0cd734ead89f9be7a56698864bf1cb45c39d9f1def9393bc82"} Jan 22 10:01:45 crc kubenswrapper[4892]: I0122 10:01:45.903591 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9fbhc" event={"ID":"58eb61a0-f599-4cdb-a15d-1cf02976a747","Type":"ContainerStarted","Data":"bf2bb99c4919645dd7f3e6656c8a9def01baa8acf68b916ce846d264d3745310"} Jan 22 10:01:46 crc kubenswrapper[4892]: I0122 10:01:46.913136 4892 generic.go:334] "Generic (PLEG): container finished" podID="58eb61a0-f599-4cdb-a15d-1cf02976a747" containerID="ba750261dca455aaf09e460e6007a5a04b84053c8421876c6cf1a96d20a121fd" exitCode=0 Jan 22 10:01:46 crc kubenswrapper[4892]: I0122 10:01:46.913200 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9fbhc" 
event={"ID":"58eb61a0-f599-4cdb-a15d-1cf02976a747","Type":"ContainerDied","Data":"ba750261dca455aaf09e460e6007a5a04b84053c8421876c6cf1a96d20a121fd"} Jan 22 10:01:47 crc kubenswrapper[4892]: I0122 10:01:47.924886 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9fbhc" event={"ID":"58eb61a0-f599-4cdb-a15d-1cf02976a747","Type":"ContainerStarted","Data":"05013fc5368c21c7d9ae81ff6cc20d47fa833664a7bc0ea90c89d6786515a91d"} Jan 22 10:01:47 crc kubenswrapper[4892]: I0122 10:01:47.949584 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9fbhc" podStartSLOduration=10.447917219 podStartE2EDuration="11.949564175s" podCreationTimestamp="2026-01-22 10:01:36 +0000 UTC" firstStartedPulling="2026-01-22 10:01:45.904193881 +0000 UTC m=+3075.748272944" lastFinishedPulling="2026-01-22 10:01:47.405840837 +0000 UTC m=+3077.249919900" observedRunningTime="2026-01-22 10:01:47.943981835 +0000 UTC m=+3077.788060888" watchObservedRunningTime="2026-01-22 10:01:47.949564175 +0000 UTC m=+3077.793643238" Jan 22 10:01:51 crc kubenswrapper[4892]: I0122 10:01:51.426871 4892 scope.go:117] "RemoveContainer" containerID="f02b7adbd7ecf9d9310d1d7218fcd8432ecf37dc2ae44231d1b4fbaeaaf19a06" Jan 22 10:01:51 crc kubenswrapper[4892]: E0122 10:01:51.427618 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:01:57 crc kubenswrapper[4892]: I0122 10:01:57.095385 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9fbhc" Jan 22 10:01:57 crc kubenswrapper[4892]: I0122 10:01:57.095943 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9fbhc" Jan 22 10:01:57 crc kubenswrapper[4892]: I0122 10:01:57.138587 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9fbhc" Jan 22 10:01:58 crc kubenswrapper[4892]: I0122 10:01:58.059210 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9fbhc" Jan 22 10:01:58 crc kubenswrapper[4892]: I0122 10:01:58.105703 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9fbhc"] Jan 22 10:01:58 crc kubenswrapper[4892]: I0122 10:01:58.358856 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 22 10:02:00 crc kubenswrapper[4892]: I0122 10:02:00.033073 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"13171535-bfb7-4114-884d-b9b031615de3","Type":"ContainerStarted","Data":"e20e5b958b227c79c109817fc0b6e5f82b67a482502a8bf37f96d59d0a4232f6"} Jan 22 10:02:00 crc kubenswrapper[4892]: I0122 10:02:00.033139 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9fbhc" podUID="58eb61a0-f599-4cdb-a15d-1cf02976a747" containerName="registry-server" 
containerID="cri-o://05013fc5368c21c7d9ae81ff6cc20d47fa833664a7bc0ea90c89d6786515a91d" gracePeriod=2 Jan 22 10:02:00 crc kubenswrapper[4892]: I0122 10:02:00.077532 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.084544274 podStartE2EDuration="1m1.077513498s" podCreationTimestamp="2026-01-22 10:00:59 +0000 UTC" firstStartedPulling="2026-01-22 10:01:01.362550294 +0000 UTC m=+3031.206629357" lastFinishedPulling="2026-01-22 10:01:58.355519518 +0000 UTC m=+3088.199598581" observedRunningTime="2026-01-22 10:02:00.069046996 +0000 UTC m=+3089.913126079" watchObservedRunningTime="2026-01-22 10:02:00.077513498 +0000 UTC m=+3089.921592561" Jan 22 10:02:00 crc kubenswrapper[4892]: I0122 10:02:00.495925 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9fbhc" Jan 22 10:02:00 crc kubenswrapper[4892]: I0122 10:02:00.539251 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58eb61a0-f599-4cdb-a15d-1cf02976a747-catalog-content\") pod \"58eb61a0-f599-4cdb-a15d-1cf02976a747\" (UID: \"58eb61a0-f599-4cdb-a15d-1cf02976a747\") " Jan 22 10:02:00 crc kubenswrapper[4892]: I0122 10:02:00.539506 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45kh9\" (UniqueName: \"kubernetes.io/projected/58eb61a0-f599-4cdb-a15d-1cf02976a747-kube-api-access-45kh9\") pod \"58eb61a0-f599-4cdb-a15d-1cf02976a747\" (UID: \"58eb61a0-f599-4cdb-a15d-1cf02976a747\") " Jan 22 10:02:00 crc kubenswrapper[4892]: I0122 10:02:00.539656 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58eb61a0-f599-4cdb-a15d-1cf02976a747-utilities\") pod \"58eb61a0-f599-4cdb-a15d-1cf02976a747\" (UID: \"58eb61a0-f599-4cdb-a15d-1cf02976a747\") " Jan 22 10:02:00 crc kubenswrapper[4892]: I0122 10:02:00.540740 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58eb61a0-f599-4cdb-a15d-1cf02976a747-utilities" (OuterVolumeSpecName: "utilities") pod "58eb61a0-f599-4cdb-a15d-1cf02976a747" (UID: "58eb61a0-f599-4cdb-a15d-1cf02976a747"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:02:00 crc kubenswrapper[4892]: I0122 10:02:00.559416 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58eb61a0-f599-4cdb-a15d-1cf02976a747-kube-api-access-45kh9" (OuterVolumeSpecName: "kube-api-access-45kh9") pod "58eb61a0-f599-4cdb-a15d-1cf02976a747" (UID: "58eb61a0-f599-4cdb-a15d-1cf02976a747"). InnerVolumeSpecName "kube-api-access-45kh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:02:00 crc kubenswrapper[4892]: I0122 10:02:00.596469 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58eb61a0-f599-4cdb-a15d-1cf02976a747-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58eb61a0-f599-4cdb-a15d-1cf02976a747" (UID: "58eb61a0-f599-4cdb-a15d-1cf02976a747"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:02:00 crc kubenswrapper[4892]: I0122 10:02:00.643451 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58eb61a0-f599-4cdb-a15d-1cf02976a747-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:02:00 crc kubenswrapper[4892]: I0122 10:02:00.643500 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58eb61a0-f599-4cdb-a15d-1cf02976a747-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:02:00 crc kubenswrapper[4892]: I0122 10:02:00.643515 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45kh9\" (UniqueName: \"kubernetes.io/projected/58eb61a0-f599-4cdb-a15d-1cf02976a747-kube-api-access-45kh9\") on node \"crc\" DevicePath \"\"" Jan 22 10:02:01 crc kubenswrapper[4892]: I0122 10:02:01.048094 4892 generic.go:334] "Generic (PLEG): container finished" podID="58eb61a0-f599-4cdb-a15d-1cf02976a747" containerID="05013fc5368c21c7d9ae81ff6cc20d47fa833664a7bc0ea90c89d6786515a91d" exitCode=0 Jan 22 10:02:01 crc kubenswrapper[4892]: I0122 10:02:01.048205 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9fbhc" Jan 22 10:02:01 crc kubenswrapper[4892]: I0122 10:02:01.048317 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9fbhc" event={"ID":"58eb61a0-f599-4cdb-a15d-1cf02976a747","Type":"ContainerDied","Data":"05013fc5368c21c7d9ae81ff6cc20d47fa833664a7bc0ea90c89d6786515a91d"} Jan 22 10:02:01 crc kubenswrapper[4892]: I0122 10:02:01.048800 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9fbhc" event={"ID":"58eb61a0-f599-4cdb-a15d-1cf02976a747","Type":"ContainerDied","Data":"bf2bb99c4919645dd7f3e6656c8a9def01baa8acf68b916ce846d264d3745310"} Jan 22 10:02:01 crc kubenswrapper[4892]: I0122 10:02:01.048827 4892 scope.go:117] "RemoveContainer" containerID="05013fc5368c21c7d9ae81ff6cc20d47fa833664a7bc0ea90c89d6786515a91d" Jan 22 10:02:01 crc kubenswrapper[4892]: I0122 10:02:01.111011 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9fbhc"] Jan 22 10:02:01 crc kubenswrapper[4892]: I0122 10:02:01.124333 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9fbhc"] Jan 22 10:02:01 crc kubenswrapper[4892]: I0122 10:02:01.433534 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58eb61a0-f599-4cdb-a15d-1cf02976a747" path="/var/lib/kubelet/pods/58eb61a0-f599-4cdb-a15d-1cf02976a747/volumes" Jan 22 10:02:02 crc kubenswrapper[4892]: I0122 10:02:02.142557 4892 scope.go:117] "RemoveContainer" containerID="ba750261dca455aaf09e460e6007a5a04b84053c8421876c6cf1a96d20a121fd" Jan 22 10:02:02 crc kubenswrapper[4892]: I0122 10:02:02.168155 4892 scope.go:117] "RemoveContainer" containerID="6116d6f15aa72f0cd734ead89f9be7a56698864bf1cb45c39d9f1def9393bc82" Jan 22 10:02:02 crc kubenswrapper[4892]: I0122 10:02:02.212023 4892 scope.go:117] "RemoveContainer" containerID="05013fc5368c21c7d9ae81ff6cc20d47fa833664a7bc0ea90c89d6786515a91d" Jan 22 10:02:02 crc kubenswrapper[4892]: E0122 10:02:02.212543 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05013fc5368c21c7d9ae81ff6cc20d47fa833664a7bc0ea90c89d6786515a91d\": container with ID 
starting with 05013fc5368c21c7d9ae81ff6cc20d47fa833664a7bc0ea90c89d6786515a91d not found: ID does not exist" containerID="05013fc5368c21c7d9ae81ff6cc20d47fa833664a7bc0ea90c89d6786515a91d" Jan 22 10:02:02 crc kubenswrapper[4892]: I0122 10:02:02.212584 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05013fc5368c21c7d9ae81ff6cc20d47fa833664a7bc0ea90c89d6786515a91d"} err="failed to get container status \"05013fc5368c21c7d9ae81ff6cc20d47fa833664a7bc0ea90c89d6786515a91d\": rpc error: code = NotFound desc = could not find container \"05013fc5368c21c7d9ae81ff6cc20d47fa833664a7bc0ea90c89d6786515a91d\": container with ID starting with 05013fc5368c21c7d9ae81ff6cc20d47fa833664a7bc0ea90c89d6786515a91d not found: ID does not exist" Jan 22 10:02:02 crc kubenswrapper[4892]: I0122 10:02:02.212612 4892 scope.go:117] "RemoveContainer" containerID="ba750261dca455aaf09e460e6007a5a04b84053c8421876c6cf1a96d20a121fd" Jan 22 10:02:02 crc kubenswrapper[4892]: E0122 10:02:02.212972 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba750261dca455aaf09e460e6007a5a04b84053c8421876c6cf1a96d20a121fd\": container with ID starting with ba750261dca455aaf09e460e6007a5a04b84053c8421876c6cf1a96d20a121fd not found: ID does not exist" containerID="ba750261dca455aaf09e460e6007a5a04b84053c8421876c6cf1a96d20a121fd" Jan 22 10:02:02 crc kubenswrapper[4892]: I0122 10:02:02.213022 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba750261dca455aaf09e460e6007a5a04b84053c8421876c6cf1a96d20a121fd"} err="failed to get container status \"ba750261dca455aaf09e460e6007a5a04b84053c8421876c6cf1a96d20a121fd\": rpc error: code = NotFound desc = could not find container \"ba750261dca455aaf09e460e6007a5a04b84053c8421876c6cf1a96d20a121fd\": container with ID starting with ba750261dca455aaf09e460e6007a5a04b84053c8421876c6cf1a96d20a121fd not found: ID does not exist" Jan 22 10:02:02 crc kubenswrapper[4892]: I0122 10:02:02.213050 4892 scope.go:117] "RemoveContainer" containerID="6116d6f15aa72f0cd734ead89f9be7a56698864bf1cb45c39d9f1def9393bc82" Jan 22 10:02:02 crc kubenswrapper[4892]: E0122 10:02:02.213523 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6116d6f15aa72f0cd734ead89f9be7a56698864bf1cb45c39d9f1def9393bc82\": container with ID starting with 6116d6f15aa72f0cd734ead89f9be7a56698864bf1cb45c39d9f1def9393bc82 not found: ID does not exist" containerID="6116d6f15aa72f0cd734ead89f9be7a56698864bf1cb45c39d9f1def9393bc82" Jan 22 10:02:02 crc kubenswrapper[4892]: I0122 10:02:02.213549 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6116d6f15aa72f0cd734ead89f9be7a56698864bf1cb45c39d9f1def9393bc82"} err="failed to get container status \"6116d6f15aa72f0cd734ead89f9be7a56698864bf1cb45c39d9f1def9393bc82\": rpc error: code = NotFound desc = could not find container \"6116d6f15aa72f0cd734ead89f9be7a56698864bf1cb45c39d9f1def9393bc82\": container with ID starting with 6116d6f15aa72f0cd734ead89f9be7a56698864bf1cb45c39d9f1def9393bc82 not found: ID does not exist" Jan 22 10:02:06 crc kubenswrapper[4892]: I0122 10:02:06.423070 4892 scope.go:117] "RemoveContainer" containerID="f02b7adbd7ecf9d9310d1d7218fcd8432ecf37dc2ae44231d1b4fbaeaaf19a06" Jan 22 10:02:06 crc kubenswrapper[4892]: E0122 10:02:06.423986 4892 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:02:19 crc kubenswrapper[4892]: I0122 10:02:19.418555 4892 scope.go:117] "RemoveContainer" containerID="f02b7adbd7ecf9d9310d1d7218fcd8432ecf37dc2ae44231d1b4fbaeaaf19a06" Jan 22 10:02:19 crc kubenswrapper[4892]: E0122 10:02:19.419303 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:02:33 crc kubenswrapper[4892]: I0122 10:02:33.428173 4892 scope.go:117] "RemoveContainer" containerID="f02b7adbd7ecf9d9310d1d7218fcd8432ecf37dc2ae44231d1b4fbaeaaf19a06" Jan 22 10:02:33 crc kubenswrapper[4892]: E0122 10:02:33.428913 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:02:47 crc kubenswrapper[4892]: I0122 10:02:47.418732 4892 scope.go:117] "RemoveContainer" containerID="f02b7adbd7ecf9d9310d1d7218fcd8432ecf37dc2ae44231d1b4fbaeaaf19a06" Jan 22 10:02:47 crc kubenswrapper[4892]: E0122 10:02:47.419568 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:02:59 crc kubenswrapper[4892]: I0122 10:02:59.420735 4892 scope.go:117] "RemoveContainer" containerID="f02b7adbd7ecf9d9310d1d7218fcd8432ecf37dc2ae44231d1b4fbaeaaf19a06" Jan 22 10:02:59 crc kubenswrapper[4892]: E0122 10:02:59.421687 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:03:14 crc kubenswrapper[4892]: I0122 10:03:14.418745 4892 scope.go:117] "RemoveContainer" containerID="f02b7adbd7ecf9d9310d1d7218fcd8432ecf37dc2ae44231d1b4fbaeaaf19a06" Jan 22 10:03:14 crc kubenswrapper[4892]: E0122 10:03:14.419522 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:03:28 crc kubenswrapper[4892]: I0122 10:03:28.418758 4892 scope.go:117] "RemoveContainer" containerID="f02b7adbd7ecf9d9310d1d7218fcd8432ecf37dc2ae44231d1b4fbaeaaf19a06" Jan 22 10:03:28 crc kubenswrapper[4892]: E0122 10:03:28.419691 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:03:42 crc kubenswrapper[4892]: I0122 10:03:42.419723 4892 scope.go:117] "RemoveContainer" containerID="f02b7adbd7ecf9d9310d1d7218fcd8432ecf37dc2ae44231d1b4fbaeaaf19a06" Jan 22 10:03:42 crc kubenswrapper[4892]: E0122 10:03:42.422348 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:03:54 crc kubenswrapper[4892]: I0122 10:03:54.418978 4892 scope.go:117] "RemoveContainer" containerID="f02b7adbd7ecf9d9310d1d7218fcd8432ecf37dc2ae44231d1b4fbaeaaf19a06" Jan 22 10:03:54 crc kubenswrapper[4892]: E0122 10:03:54.420168 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:04:09 crc kubenswrapper[4892]: I0122 10:04:09.419096 4892 scope.go:117] "RemoveContainer" containerID="f02b7adbd7ecf9d9310d1d7218fcd8432ecf37dc2ae44231d1b4fbaeaaf19a06" Jan 22 10:04:09 crc kubenswrapper[4892]: E0122 10:04:09.420000 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:04:20 crc kubenswrapper[4892]: I0122 10:04:20.419734 4892 scope.go:117] "RemoveContainer" containerID="f02b7adbd7ecf9d9310d1d7218fcd8432ecf37dc2ae44231d1b4fbaeaaf19a06" Jan 22 10:04:20 crc kubenswrapper[4892]: E0122 10:04:20.420926 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" 
podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:04:31 crc kubenswrapper[4892]: I0122 10:04:31.428238 4892 scope.go:117] "RemoveContainer" containerID="f02b7adbd7ecf9d9310d1d7218fcd8432ecf37dc2ae44231d1b4fbaeaaf19a06" Jan 22 10:04:31 crc kubenswrapper[4892]: E0122 10:04:31.429413 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:04:43 crc kubenswrapper[4892]: I0122 10:04:43.419131 4892 scope.go:117] "RemoveContainer" containerID="f02b7adbd7ecf9d9310d1d7218fcd8432ecf37dc2ae44231d1b4fbaeaaf19a06" Jan 22 10:04:43 crc kubenswrapper[4892]: E0122 10:04:43.420187 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:04:58 crc kubenswrapper[4892]: I0122 10:04:58.418648 4892 scope.go:117] "RemoveContainer" containerID="f02b7adbd7ecf9d9310d1d7218fcd8432ecf37dc2ae44231d1b4fbaeaaf19a06" Jan 22 10:04:58 crc kubenswrapper[4892]: E0122 10:04:58.420334 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:05:09 crc kubenswrapper[4892]: I0122 10:05:09.419076 4892 scope.go:117] "RemoveContainer" containerID="f02b7adbd7ecf9d9310d1d7218fcd8432ecf37dc2ae44231d1b4fbaeaaf19a06" Jan 22 10:05:09 crc kubenswrapper[4892]: E0122 10:05:09.419949 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:05:22 crc kubenswrapper[4892]: I0122 10:05:22.419488 4892 scope.go:117] "RemoveContainer" containerID="f02b7adbd7ecf9d9310d1d7218fcd8432ecf37dc2ae44231d1b4fbaeaaf19a06" Jan 22 10:05:22 crc kubenswrapper[4892]: E0122 10:05:22.420330 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:05:33 crc kubenswrapper[4892]: I0122 10:05:33.418977 4892 scope.go:117] "RemoveContainer" 
containerID="f02b7adbd7ecf9d9310d1d7218fcd8432ecf37dc2ae44231d1b4fbaeaaf19a06" Jan 22 10:05:33 crc kubenswrapper[4892]: E0122 10:05:33.419853 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:05:45 crc kubenswrapper[4892]: I0122 10:05:45.419270 4892 scope.go:117] "RemoveContainer" containerID="f02b7adbd7ecf9d9310d1d7218fcd8432ecf37dc2ae44231d1b4fbaeaaf19a06" Jan 22 10:05:45 crc kubenswrapper[4892]: E0122 10:05:45.420169 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:06:00 crc kubenswrapper[4892]: I0122 10:06:00.418767 4892 scope.go:117] "RemoveContainer" containerID="f02b7adbd7ecf9d9310d1d7218fcd8432ecf37dc2ae44231d1b4fbaeaaf19a06" Jan 22 10:06:01 crc kubenswrapper[4892]: I0122 10:06:01.465167 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerStarted","Data":"635df7370cafeb8860a59e6538755fbe775d6f245fb65c173fd55abb94f89570"} Jan 22 10:08:16 crc kubenswrapper[4892]: I0122 10:08:16.323189 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:08:16 crc kubenswrapper[4892]: I0122 10:08:16.323740 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:08:31 crc kubenswrapper[4892]: I0122 10:08:31.387335 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hj7qm"] Jan 22 10:08:31 crc kubenswrapper[4892]: E0122 10:08:31.389054 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58eb61a0-f599-4cdb-a15d-1cf02976a747" containerName="extract-utilities" Jan 22 10:08:31 crc kubenswrapper[4892]: I0122 10:08:31.389080 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="58eb61a0-f599-4cdb-a15d-1cf02976a747" containerName="extract-utilities" Jan 22 10:08:31 crc kubenswrapper[4892]: E0122 10:08:31.389122 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58eb61a0-f599-4cdb-a15d-1cf02976a747" containerName="registry-server" Jan 22 10:08:31 crc kubenswrapper[4892]: I0122 10:08:31.389131 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="58eb61a0-f599-4cdb-a15d-1cf02976a747" containerName="registry-server" Jan 22 10:08:31 crc kubenswrapper[4892]: E0122 10:08:31.389149 4892 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58eb61a0-f599-4cdb-a15d-1cf02976a747" containerName="extract-content" Jan 22 10:08:31 crc kubenswrapper[4892]: I0122 10:08:31.389157 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="58eb61a0-f599-4cdb-a15d-1cf02976a747" containerName="extract-content" Jan 22 10:08:31 crc kubenswrapper[4892]: I0122 10:08:31.389420 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="58eb61a0-f599-4cdb-a15d-1cf02976a747" containerName="registry-server" Jan 22 10:08:31 crc kubenswrapper[4892]: I0122 10:08:31.391732 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hj7qm" Jan 22 10:08:31 crc kubenswrapper[4892]: I0122 10:08:31.399861 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hj7qm"] Jan 22 10:08:31 crc kubenswrapper[4892]: I0122 10:08:31.497029 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef364a90-9f31-47d9-b247-2b98dd5a24d1-utilities\") pod \"certified-operators-hj7qm\" (UID: \"ef364a90-9f31-47d9-b247-2b98dd5a24d1\") " pod="openshift-marketplace/certified-operators-hj7qm" Jan 22 10:08:31 crc kubenswrapper[4892]: I0122 10:08:31.497531 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef364a90-9f31-47d9-b247-2b98dd5a24d1-catalog-content\") pod \"certified-operators-hj7qm\" (UID: \"ef364a90-9f31-47d9-b247-2b98dd5a24d1\") " pod="openshift-marketplace/certified-operators-hj7qm" Jan 22 10:08:31 crc kubenswrapper[4892]: I0122 10:08:31.497565 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwpw8\" (UniqueName: \"kubernetes.io/projected/ef364a90-9f31-47d9-b247-2b98dd5a24d1-kube-api-access-xwpw8\") pod \"certified-operators-hj7qm\" (UID: \"ef364a90-9f31-47d9-b247-2b98dd5a24d1\") " pod="openshift-marketplace/certified-operators-hj7qm" Jan 22 10:08:31 crc kubenswrapper[4892]: I0122 10:08:31.600370 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef364a90-9f31-47d9-b247-2b98dd5a24d1-catalog-content\") pod \"certified-operators-hj7qm\" (UID: \"ef364a90-9f31-47d9-b247-2b98dd5a24d1\") " pod="openshift-marketplace/certified-operators-hj7qm" Jan 22 10:08:31 crc kubenswrapper[4892]: I0122 10:08:31.600455 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwpw8\" (UniqueName: \"kubernetes.io/projected/ef364a90-9f31-47d9-b247-2b98dd5a24d1-kube-api-access-xwpw8\") pod \"certified-operators-hj7qm\" (UID: \"ef364a90-9f31-47d9-b247-2b98dd5a24d1\") " pod="openshift-marketplace/certified-operators-hj7qm" Jan 22 10:08:31 crc kubenswrapper[4892]: I0122 10:08:31.600761 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef364a90-9f31-47d9-b247-2b98dd5a24d1-catalog-content\") pod \"certified-operators-hj7qm\" (UID: \"ef364a90-9f31-47d9-b247-2b98dd5a24d1\") " pod="openshift-marketplace/certified-operators-hj7qm" Jan 22 10:08:31 crc kubenswrapper[4892]: I0122 10:08:31.600807 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ef364a90-9f31-47d9-b247-2b98dd5a24d1-utilities\") pod \"certified-operators-hj7qm\" (UID: \"ef364a90-9f31-47d9-b247-2b98dd5a24d1\") " pod="openshift-marketplace/certified-operators-hj7qm" Jan 22 10:08:31 crc kubenswrapper[4892]: I0122 10:08:31.601517 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef364a90-9f31-47d9-b247-2b98dd5a24d1-utilities\") pod \"certified-operators-hj7qm\" (UID: \"ef364a90-9f31-47d9-b247-2b98dd5a24d1\") " pod="openshift-marketplace/certified-operators-hj7qm" Jan 22 10:08:31 crc kubenswrapper[4892]: I0122 10:08:31.626134 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwpw8\" (UniqueName: \"kubernetes.io/projected/ef364a90-9f31-47d9-b247-2b98dd5a24d1-kube-api-access-xwpw8\") pod \"certified-operators-hj7qm\" (UID: \"ef364a90-9f31-47d9-b247-2b98dd5a24d1\") " pod="openshift-marketplace/certified-operators-hj7qm" Jan 22 10:08:31 crc kubenswrapper[4892]: I0122 10:08:31.723978 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hj7qm" Jan 22 10:08:32 crc kubenswrapper[4892]: I0122 10:08:32.358425 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hj7qm"] Jan 22 10:08:32 crc kubenswrapper[4892]: I0122 10:08:32.829368 4892 generic.go:334] "Generic (PLEG): container finished" podID="ef364a90-9f31-47d9-b247-2b98dd5a24d1" containerID="9edc2b36da942657a9af49fae9b52716de1b3dd1d48e8bd2af4e04fb373a6f63" exitCode=0 Jan 22 10:08:32 crc kubenswrapper[4892]: I0122 10:08:32.829460 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hj7qm" event={"ID":"ef364a90-9f31-47d9-b247-2b98dd5a24d1","Type":"ContainerDied","Data":"9edc2b36da942657a9af49fae9b52716de1b3dd1d48e8bd2af4e04fb373a6f63"} Jan 22 10:08:32 crc kubenswrapper[4892]: I0122 10:08:32.829745 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hj7qm" event={"ID":"ef364a90-9f31-47d9-b247-2b98dd5a24d1","Type":"ContainerStarted","Data":"b57996b56880282f8736473f3d5af24b4b6b648a601c1088b67efe940b571721"} Jan 22 10:08:32 crc kubenswrapper[4892]: I0122 10:08:32.833030 4892 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 10:08:33 crc kubenswrapper[4892]: I0122 10:08:33.840340 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hj7qm" event={"ID":"ef364a90-9f31-47d9-b247-2b98dd5a24d1","Type":"ContainerStarted","Data":"2f62bef68217174dd6cf1118ef2cdda0aac1f360263d060f8e9f515aec827e73"} Jan 22 10:08:34 crc kubenswrapper[4892]: I0122 10:08:34.852416 4892 generic.go:334] "Generic (PLEG): container finished" podID="ef364a90-9f31-47d9-b247-2b98dd5a24d1" containerID="2f62bef68217174dd6cf1118ef2cdda0aac1f360263d060f8e9f515aec827e73" exitCode=0 Jan 22 10:08:34 crc kubenswrapper[4892]: I0122 10:08:34.852485 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hj7qm" event={"ID":"ef364a90-9f31-47d9-b247-2b98dd5a24d1","Type":"ContainerDied","Data":"2f62bef68217174dd6cf1118ef2cdda0aac1f360263d060f8e9f515aec827e73"} Jan 22 10:08:35 crc kubenswrapper[4892]: I0122 10:08:35.863442 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hj7qm" 
event={"ID":"ef364a90-9f31-47d9-b247-2b98dd5a24d1","Type":"ContainerStarted","Data":"b487d4939fd54115aea05c2d7a89f3e82ebb5e88b680160bf1b9a44dd62171a3"} Jan 22 10:08:41 crc kubenswrapper[4892]: I0122 10:08:41.724653 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hj7qm" Jan 22 10:08:41 crc kubenswrapper[4892]: I0122 10:08:41.725308 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hj7qm" Jan 22 10:08:41 crc kubenswrapper[4892]: I0122 10:08:41.767839 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hj7qm" Jan 22 10:08:41 crc kubenswrapper[4892]: I0122 10:08:41.785084 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hj7qm" podStartSLOduration=8.266767485999999 podStartE2EDuration="10.785023951s" podCreationTimestamp="2026-01-22 10:08:31 +0000 UTC" firstStartedPulling="2026-01-22 10:08:32.832715306 +0000 UTC m=+3482.676794369" lastFinishedPulling="2026-01-22 10:08:35.350971771 +0000 UTC m=+3485.195050834" observedRunningTime="2026-01-22 10:08:35.882795807 +0000 UTC m=+3485.726874880" watchObservedRunningTime="2026-01-22 10:08:41.785023951 +0000 UTC m=+3491.629103024" Jan 22 10:08:41 crc kubenswrapper[4892]: I0122 10:08:41.955431 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hj7qm" Jan 22 10:08:42 crc kubenswrapper[4892]: I0122 10:08:42.006891 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hj7qm"] Jan 22 10:08:43 crc kubenswrapper[4892]: I0122 10:08:43.928879 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hj7qm" podUID="ef364a90-9f31-47d9-b247-2b98dd5a24d1" containerName="registry-server" containerID="cri-o://b487d4939fd54115aea05c2d7a89f3e82ebb5e88b680160bf1b9a44dd62171a3" gracePeriod=2 Jan 22 10:08:44 crc kubenswrapper[4892]: I0122 10:08:44.438250 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hj7qm" Jan 22 10:08:44 crc kubenswrapper[4892]: I0122 10:08:44.557063 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwpw8\" (UniqueName: \"kubernetes.io/projected/ef364a90-9f31-47d9-b247-2b98dd5a24d1-kube-api-access-xwpw8\") pod \"ef364a90-9f31-47d9-b247-2b98dd5a24d1\" (UID: \"ef364a90-9f31-47d9-b247-2b98dd5a24d1\") " Jan 22 10:08:44 crc kubenswrapper[4892]: I0122 10:08:44.557223 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef364a90-9f31-47d9-b247-2b98dd5a24d1-catalog-content\") pod \"ef364a90-9f31-47d9-b247-2b98dd5a24d1\" (UID: \"ef364a90-9f31-47d9-b247-2b98dd5a24d1\") " Jan 22 10:08:44 crc kubenswrapper[4892]: I0122 10:08:44.557342 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef364a90-9f31-47d9-b247-2b98dd5a24d1-utilities\") pod \"ef364a90-9f31-47d9-b247-2b98dd5a24d1\" (UID: \"ef364a90-9f31-47d9-b247-2b98dd5a24d1\") " Jan 22 10:08:44 crc kubenswrapper[4892]: I0122 10:08:44.559111 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef364a90-9f31-47d9-b247-2b98dd5a24d1-utilities" (OuterVolumeSpecName: "utilities") pod "ef364a90-9f31-47d9-b247-2b98dd5a24d1" (UID: "ef364a90-9f31-47d9-b247-2b98dd5a24d1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:08:44 crc kubenswrapper[4892]: I0122 10:08:44.563620 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef364a90-9f31-47d9-b247-2b98dd5a24d1-kube-api-access-xwpw8" (OuterVolumeSpecName: "kube-api-access-xwpw8") pod "ef364a90-9f31-47d9-b247-2b98dd5a24d1" (UID: "ef364a90-9f31-47d9-b247-2b98dd5a24d1"). InnerVolumeSpecName "kube-api-access-xwpw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:08:44 crc kubenswrapper[4892]: I0122 10:08:44.605727 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef364a90-9f31-47d9-b247-2b98dd5a24d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef364a90-9f31-47d9-b247-2b98dd5a24d1" (UID: "ef364a90-9f31-47d9-b247-2b98dd5a24d1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:08:44 crc kubenswrapper[4892]: I0122 10:08:44.659926 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef364a90-9f31-47d9-b247-2b98dd5a24d1-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:08:44 crc kubenswrapper[4892]: I0122 10:08:44.660249 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwpw8\" (UniqueName: \"kubernetes.io/projected/ef364a90-9f31-47d9-b247-2b98dd5a24d1-kube-api-access-xwpw8\") on node \"crc\" DevicePath \"\"" Jan 22 10:08:44 crc kubenswrapper[4892]: I0122 10:08:44.660261 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef364a90-9f31-47d9-b247-2b98dd5a24d1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:08:44 crc kubenswrapper[4892]: I0122 10:08:44.938318 4892 generic.go:334] "Generic (PLEG): container finished" podID="ef364a90-9f31-47d9-b247-2b98dd5a24d1" containerID="b487d4939fd54115aea05c2d7a89f3e82ebb5e88b680160bf1b9a44dd62171a3" exitCode=0 Jan 22 10:08:44 crc kubenswrapper[4892]: I0122 10:08:44.938351 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hj7qm" Jan 22 10:08:44 crc kubenswrapper[4892]: I0122 10:08:44.938366 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hj7qm" event={"ID":"ef364a90-9f31-47d9-b247-2b98dd5a24d1","Type":"ContainerDied","Data":"b487d4939fd54115aea05c2d7a89f3e82ebb5e88b680160bf1b9a44dd62171a3"} Jan 22 10:08:44 crc kubenswrapper[4892]: I0122 10:08:44.938399 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hj7qm" event={"ID":"ef364a90-9f31-47d9-b247-2b98dd5a24d1","Type":"ContainerDied","Data":"b57996b56880282f8736473f3d5af24b4b6b648a601c1088b67efe940b571721"} Jan 22 10:08:44 crc kubenswrapper[4892]: I0122 10:08:44.938421 4892 scope.go:117] "RemoveContainer" containerID="b487d4939fd54115aea05c2d7a89f3e82ebb5e88b680160bf1b9a44dd62171a3" Jan 22 10:08:44 crc kubenswrapper[4892]: I0122 10:08:44.960057 4892 scope.go:117] "RemoveContainer" containerID="2f62bef68217174dd6cf1118ef2cdda0aac1f360263d060f8e9f515aec827e73" Jan 22 10:08:44 crc kubenswrapper[4892]: I0122 10:08:44.982937 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hj7qm"] Jan 22 10:08:45 crc kubenswrapper[4892]: I0122 10:08:45.008733 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hj7qm"] Jan 22 10:08:45 crc kubenswrapper[4892]: I0122 10:08:45.012725 4892 scope.go:117] "RemoveContainer" containerID="9edc2b36da942657a9af49fae9b52716de1b3dd1d48e8bd2af4e04fb373a6f63" Jan 22 10:08:45 crc kubenswrapper[4892]: I0122 10:08:45.029916 4892 scope.go:117] "RemoveContainer" containerID="b487d4939fd54115aea05c2d7a89f3e82ebb5e88b680160bf1b9a44dd62171a3" Jan 22 10:08:45 crc kubenswrapper[4892]: E0122 10:08:45.030358 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b487d4939fd54115aea05c2d7a89f3e82ebb5e88b680160bf1b9a44dd62171a3\": container with ID starting with b487d4939fd54115aea05c2d7a89f3e82ebb5e88b680160bf1b9a44dd62171a3 not found: ID does not exist" containerID="b487d4939fd54115aea05c2d7a89f3e82ebb5e88b680160bf1b9a44dd62171a3" Jan 22 10:08:45 crc kubenswrapper[4892]: I0122 10:08:45.030398 
4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b487d4939fd54115aea05c2d7a89f3e82ebb5e88b680160bf1b9a44dd62171a3"} err="failed to get container status \"b487d4939fd54115aea05c2d7a89f3e82ebb5e88b680160bf1b9a44dd62171a3\": rpc error: code = NotFound desc = could not find container \"b487d4939fd54115aea05c2d7a89f3e82ebb5e88b680160bf1b9a44dd62171a3\": container with ID starting with b487d4939fd54115aea05c2d7a89f3e82ebb5e88b680160bf1b9a44dd62171a3 not found: ID does not exist" Jan 22 10:08:45 crc kubenswrapper[4892]: I0122 10:08:45.030427 4892 scope.go:117] "RemoveContainer" containerID="2f62bef68217174dd6cf1118ef2cdda0aac1f360263d060f8e9f515aec827e73" Jan 22 10:08:45 crc kubenswrapper[4892]: E0122 10:08:45.030689 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f62bef68217174dd6cf1118ef2cdda0aac1f360263d060f8e9f515aec827e73\": container with ID starting with 2f62bef68217174dd6cf1118ef2cdda0aac1f360263d060f8e9f515aec827e73 not found: ID does not exist" containerID="2f62bef68217174dd6cf1118ef2cdda0aac1f360263d060f8e9f515aec827e73" Jan 22 10:08:45 crc kubenswrapper[4892]: I0122 10:08:45.030729 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f62bef68217174dd6cf1118ef2cdda0aac1f360263d060f8e9f515aec827e73"} err="failed to get container status \"2f62bef68217174dd6cf1118ef2cdda0aac1f360263d060f8e9f515aec827e73\": rpc error: code = NotFound desc = could not find container \"2f62bef68217174dd6cf1118ef2cdda0aac1f360263d060f8e9f515aec827e73\": container with ID starting with 2f62bef68217174dd6cf1118ef2cdda0aac1f360263d060f8e9f515aec827e73 not found: ID does not exist" Jan 22 10:08:45 crc kubenswrapper[4892]: I0122 10:08:45.030755 4892 scope.go:117] "RemoveContainer" containerID="9edc2b36da942657a9af49fae9b52716de1b3dd1d48e8bd2af4e04fb373a6f63" Jan 22 10:08:45 crc kubenswrapper[4892]: E0122 10:08:45.031024 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9edc2b36da942657a9af49fae9b52716de1b3dd1d48e8bd2af4e04fb373a6f63\": container with ID starting with 9edc2b36da942657a9af49fae9b52716de1b3dd1d48e8bd2af4e04fb373a6f63 not found: ID does not exist" containerID="9edc2b36da942657a9af49fae9b52716de1b3dd1d48e8bd2af4e04fb373a6f63" Jan 22 10:08:45 crc kubenswrapper[4892]: I0122 10:08:45.031127 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9edc2b36da942657a9af49fae9b52716de1b3dd1d48e8bd2af4e04fb373a6f63"} err="failed to get container status \"9edc2b36da942657a9af49fae9b52716de1b3dd1d48e8bd2af4e04fb373a6f63\": rpc error: code = NotFound desc = could not find container \"9edc2b36da942657a9af49fae9b52716de1b3dd1d48e8bd2af4e04fb373a6f63\": container with ID starting with 9edc2b36da942657a9af49fae9b52716de1b3dd1d48e8bd2af4e04fb373a6f63 not found: ID does not exist" Jan 22 10:08:45 crc kubenswrapper[4892]: I0122 10:08:45.430950 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef364a90-9f31-47d9-b247-2b98dd5a24d1" path="/var/lib/kubelet/pods/ef364a90-9f31-47d9-b247-2b98dd5a24d1/volumes" Jan 22 10:08:46 crc kubenswrapper[4892]: I0122 10:08:46.323718 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:08:46 crc kubenswrapper[4892]: I0122 10:08:46.324031 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:08:53 crc kubenswrapper[4892]: I0122 10:08:53.845763 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7wcsc"] Jan 22 10:08:53 crc kubenswrapper[4892]: E0122 10:08:53.847061 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef364a90-9f31-47d9-b247-2b98dd5a24d1" containerName="extract-content" Jan 22 10:08:53 crc kubenswrapper[4892]: I0122 10:08:53.847075 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef364a90-9f31-47d9-b247-2b98dd5a24d1" containerName="extract-content" Jan 22 10:08:53 crc kubenswrapper[4892]: E0122 10:08:53.847116 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef364a90-9f31-47d9-b247-2b98dd5a24d1" containerName="extract-utilities" Jan 22 10:08:53 crc kubenswrapper[4892]: I0122 10:08:53.847125 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef364a90-9f31-47d9-b247-2b98dd5a24d1" containerName="extract-utilities" Jan 22 10:08:53 crc kubenswrapper[4892]: E0122 10:08:53.847151 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef364a90-9f31-47d9-b247-2b98dd5a24d1" containerName="registry-server" Jan 22 10:08:53 crc kubenswrapper[4892]: I0122 10:08:53.847159 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef364a90-9f31-47d9-b247-2b98dd5a24d1" containerName="registry-server" Jan 22 10:08:53 crc kubenswrapper[4892]: I0122 10:08:53.847515 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef364a90-9f31-47d9-b247-2b98dd5a24d1" containerName="registry-server" Jan 22 10:08:53 crc kubenswrapper[4892]: I0122 10:08:53.849784 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7wcsc" Jan 22 10:08:53 crc kubenswrapper[4892]: I0122 10:08:53.867661 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a523d36e-88ee-4ec6-81f5-0df7100f05ef-utilities\") pod \"redhat-marketplace-7wcsc\" (UID: \"a523d36e-88ee-4ec6-81f5-0df7100f05ef\") " pod="openshift-marketplace/redhat-marketplace-7wcsc" Jan 22 10:08:53 crc kubenswrapper[4892]: I0122 10:08:53.867984 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffjzz\" (UniqueName: \"kubernetes.io/projected/a523d36e-88ee-4ec6-81f5-0df7100f05ef-kube-api-access-ffjzz\") pod \"redhat-marketplace-7wcsc\" (UID: \"a523d36e-88ee-4ec6-81f5-0df7100f05ef\") " pod="openshift-marketplace/redhat-marketplace-7wcsc" Jan 22 10:08:53 crc kubenswrapper[4892]: I0122 10:08:53.868145 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a523d36e-88ee-4ec6-81f5-0df7100f05ef-catalog-content\") pod \"redhat-marketplace-7wcsc\" (UID: \"a523d36e-88ee-4ec6-81f5-0df7100f05ef\") " pod="openshift-marketplace/redhat-marketplace-7wcsc" Jan 22 10:08:53 crc kubenswrapper[4892]: I0122 10:08:53.921345 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7wcsc"] Jan 22 10:08:53 crc kubenswrapper[4892]: I0122 10:08:53.969926 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffjzz\" (UniqueName: \"kubernetes.io/projected/a523d36e-88ee-4ec6-81f5-0df7100f05ef-kube-api-access-ffjzz\") pod \"redhat-marketplace-7wcsc\" (UID: \"a523d36e-88ee-4ec6-81f5-0df7100f05ef\") " pod="openshift-marketplace/redhat-marketplace-7wcsc" Jan 22 10:08:53 crc kubenswrapper[4892]: I0122 10:08:53.970330 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a523d36e-88ee-4ec6-81f5-0df7100f05ef-catalog-content\") pod \"redhat-marketplace-7wcsc\" (UID: \"a523d36e-88ee-4ec6-81f5-0df7100f05ef\") " pod="openshift-marketplace/redhat-marketplace-7wcsc" Jan 22 10:08:53 crc kubenswrapper[4892]: I0122 10:08:53.970471 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a523d36e-88ee-4ec6-81f5-0df7100f05ef-utilities\") pod \"redhat-marketplace-7wcsc\" (UID: \"a523d36e-88ee-4ec6-81f5-0df7100f05ef\") " pod="openshift-marketplace/redhat-marketplace-7wcsc" Jan 22 10:08:53 crc kubenswrapper[4892]: I0122 10:08:53.970964 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a523d36e-88ee-4ec6-81f5-0df7100f05ef-catalog-content\") pod \"redhat-marketplace-7wcsc\" (UID: \"a523d36e-88ee-4ec6-81f5-0df7100f05ef\") " pod="openshift-marketplace/redhat-marketplace-7wcsc" Jan 22 10:08:53 crc kubenswrapper[4892]: I0122 10:08:53.970983 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a523d36e-88ee-4ec6-81f5-0df7100f05ef-utilities\") pod \"redhat-marketplace-7wcsc\" (UID: \"a523d36e-88ee-4ec6-81f5-0df7100f05ef\") " pod="openshift-marketplace/redhat-marketplace-7wcsc" Jan 22 10:08:53 crc kubenswrapper[4892]: I0122 10:08:53.989464 4892 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ffjzz\" (UniqueName: \"kubernetes.io/projected/a523d36e-88ee-4ec6-81f5-0df7100f05ef-kube-api-access-ffjzz\") pod \"redhat-marketplace-7wcsc\" (UID: \"a523d36e-88ee-4ec6-81f5-0df7100f05ef\") " pod="openshift-marketplace/redhat-marketplace-7wcsc" Jan 22 10:08:54 crc kubenswrapper[4892]: I0122 10:08:54.255129 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7wcsc" Jan 22 10:08:54 crc kubenswrapper[4892]: I0122 10:08:54.765219 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7wcsc"] Jan 22 10:08:55 crc kubenswrapper[4892]: I0122 10:08:55.027409 4892 generic.go:334] "Generic (PLEG): container finished" podID="a523d36e-88ee-4ec6-81f5-0df7100f05ef" containerID="618bdeeb6ac4dc29b2edc235c9a7238ab75ddd9e23e8aed88e54bc23d07d67ad" exitCode=0 Jan 22 10:08:55 crc kubenswrapper[4892]: I0122 10:08:55.027487 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7wcsc" event={"ID":"a523d36e-88ee-4ec6-81f5-0df7100f05ef","Type":"ContainerDied","Data":"618bdeeb6ac4dc29b2edc235c9a7238ab75ddd9e23e8aed88e54bc23d07d67ad"} Jan 22 10:08:55 crc kubenswrapper[4892]: I0122 10:08:55.027697 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7wcsc" event={"ID":"a523d36e-88ee-4ec6-81f5-0df7100f05ef","Type":"ContainerStarted","Data":"f7073b721efd6762c2da8ca1f4013f00a766a77587a6f57ecd0f65f242c229bc"} Jan 22 10:08:57 crc kubenswrapper[4892]: I0122 10:08:57.047560 4892 generic.go:334] "Generic (PLEG): container finished" podID="a523d36e-88ee-4ec6-81f5-0df7100f05ef" containerID="6284959c672d8ce98221b3e2a4d9b8e4ff9b632adf9a9f348cff88f102791a11" exitCode=0 Jan 22 10:08:57 crc kubenswrapper[4892]: I0122 10:08:57.047595 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7wcsc" event={"ID":"a523d36e-88ee-4ec6-81f5-0df7100f05ef","Type":"ContainerDied","Data":"6284959c672d8ce98221b3e2a4d9b8e4ff9b632adf9a9f348cff88f102791a11"} Jan 22 10:08:58 crc kubenswrapper[4892]: I0122 10:08:58.061066 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7wcsc" event={"ID":"a523d36e-88ee-4ec6-81f5-0df7100f05ef","Type":"ContainerStarted","Data":"68674af1295f4c6c0f6054cbcb5af4e7d996f81f2ca8ea318677cbeca7ad4316"} Jan 22 10:08:58 crc kubenswrapper[4892]: I0122 10:08:58.083176 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7wcsc" podStartSLOduration=2.658133865 podStartE2EDuration="5.08315619s" podCreationTimestamp="2026-01-22 10:08:53 +0000 UTC" firstStartedPulling="2026-01-22 10:08:55.028882384 +0000 UTC m=+3504.872961447" lastFinishedPulling="2026-01-22 10:08:57.453904709 +0000 UTC m=+3507.297983772" observedRunningTime="2026-01-22 10:08:58.077038287 +0000 UTC m=+3507.921117370" watchObservedRunningTime="2026-01-22 10:08:58.08315619 +0000 UTC m=+3507.927235253" Jan 22 10:09:04 crc kubenswrapper[4892]: I0122 10:09:04.256104 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7wcsc" Jan 22 10:09:04 crc kubenswrapper[4892]: I0122 10:09:04.256672 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7wcsc" Jan 22 10:09:04 crc kubenswrapper[4892]: I0122 10:09:04.305655 4892 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7wcsc" Jan 22 10:09:05 crc kubenswrapper[4892]: I0122 10:09:05.182467 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7wcsc" Jan 22 10:09:05 crc kubenswrapper[4892]: I0122 10:09:05.244711 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7wcsc"] Jan 22 10:09:07 crc kubenswrapper[4892]: I0122 10:09:07.149782 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7wcsc" podUID="a523d36e-88ee-4ec6-81f5-0df7100f05ef" containerName="registry-server" containerID="cri-o://68674af1295f4c6c0f6054cbcb5af4e7d996f81f2ca8ea318677cbeca7ad4316" gracePeriod=2 Jan 22 10:09:08 crc kubenswrapper[4892]: I0122 10:09:08.159594 4892 generic.go:334] "Generic (PLEG): container finished" podID="a523d36e-88ee-4ec6-81f5-0df7100f05ef" containerID="68674af1295f4c6c0f6054cbcb5af4e7d996f81f2ca8ea318677cbeca7ad4316" exitCode=0 Jan 22 10:09:08 crc kubenswrapper[4892]: I0122 10:09:08.159799 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7wcsc" event={"ID":"a523d36e-88ee-4ec6-81f5-0df7100f05ef","Type":"ContainerDied","Data":"68674af1295f4c6c0f6054cbcb5af4e7d996f81f2ca8ea318677cbeca7ad4316"} Jan 22 10:09:08 crc kubenswrapper[4892]: I0122 10:09:08.265707 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7wcsc" Jan 22 10:09:08 crc kubenswrapper[4892]: I0122 10:09:08.416722 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffjzz\" (UniqueName: \"kubernetes.io/projected/a523d36e-88ee-4ec6-81f5-0df7100f05ef-kube-api-access-ffjzz\") pod \"a523d36e-88ee-4ec6-81f5-0df7100f05ef\" (UID: \"a523d36e-88ee-4ec6-81f5-0df7100f05ef\") " Jan 22 10:09:08 crc kubenswrapper[4892]: I0122 10:09:08.416848 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a523d36e-88ee-4ec6-81f5-0df7100f05ef-utilities\") pod \"a523d36e-88ee-4ec6-81f5-0df7100f05ef\" (UID: \"a523d36e-88ee-4ec6-81f5-0df7100f05ef\") " Jan 22 10:09:08 crc kubenswrapper[4892]: I0122 10:09:08.417932 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a523d36e-88ee-4ec6-81f5-0df7100f05ef-utilities" (OuterVolumeSpecName: "utilities") pod "a523d36e-88ee-4ec6-81f5-0df7100f05ef" (UID: "a523d36e-88ee-4ec6-81f5-0df7100f05ef"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:09:08 crc kubenswrapper[4892]: I0122 10:09:08.418993 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a523d36e-88ee-4ec6-81f5-0df7100f05ef-catalog-content\") pod \"a523d36e-88ee-4ec6-81f5-0df7100f05ef\" (UID: \"a523d36e-88ee-4ec6-81f5-0df7100f05ef\") " Jan 22 10:09:08 crc kubenswrapper[4892]: I0122 10:09:08.419665 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a523d36e-88ee-4ec6-81f5-0df7100f05ef-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:09:08 crc kubenswrapper[4892]: I0122 10:09:08.424072 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a523d36e-88ee-4ec6-81f5-0df7100f05ef-kube-api-access-ffjzz" (OuterVolumeSpecName: "kube-api-access-ffjzz") pod "a523d36e-88ee-4ec6-81f5-0df7100f05ef" (UID: "a523d36e-88ee-4ec6-81f5-0df7100f05ef"). InnerVolumeSpecName "kube-api-access-ffjzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:09:08 crc kubenswrapper[4892]: I0122 10:09:08.440829 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a523d36e-88ee-4ec6-81f5-0df7100f05ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a523d36e-88ee-4ec6-81f5-0df7100f05ef" (UID: "a523d36e-88ee-4ec6-81f5-0df7100f05ef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:09:08 crc kubenswrapper[4892]: I0122 10:09:08.522962 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a523d36e-88ee-4ec6-81f5-0df7100f05ef-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:09:08 crc kubenswrapper[4892]: I0122 10:09:08.522990 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffjzz\" (UniqueName: \"kubernetes.io/projected/a523d36e-88ee-4ec6-81f5-0df7100f05ef-kube-api-access-ffjzz\") on node \"crc\" DevicePath \"\"" Jan 22 10:09:09 crc kubenswrapper[4892]: I0122 10:09:09.170876 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7wcsc" event={"ID":"a523d36e-88ee-4ec6-81f5-0df7100f05ef","Type":"ContainerDied","Data":"f7073b721efd6762c2da8ca1f4013f00a766a77587a6f57ecd0f65f242c229bc"} Jan 22 10:09:09 crc kubenswrapper[4892]: I0122 10:09:09.171241 4892 scope.go:117] "RemoveContainer" containerID="68674af1295f4c6c0f6054cbcb5af4e7d996f81f2ca8ea318677cbeca7ad4316" Jan 22 10:09:09 crc kubenswrapper[4892]: I0122 10:09:09.170942 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7wcsc" Jan 22 10:09:09 crc kubenswrapper[4892]: I0122 10:09:09.193647 4892 scope.go:117] "RemoveContainer" containerID="6284959c672d8ce98221b3e2a4d9b8e4ff9b632adf9a9f348cff88f102791a11" Jan 22 10:09:09 crc kubenswrapper[4892]: I0122 10:09:09.207563 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7wcsc"] Jan 22 10:09:09 crc kubenswrapper[4892]: I0122 10:09:09.218253 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7wcsc"] Jan 22 10:09:09 crc kubenswrapper[4892]: I0122 10:09:09.231984 4892 scope.go:117] "RemoveContainer" containerID="618bdeeb6ac4dc29b2edc235c9a7238ab75ddd9e23e8aed88e54bc23d07d67ad" Jan 22 10:09:09 crc kubenswrapper[4892]: I0122 10:09:09.431583 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a523d36e-88ee-4ec6-81f5-0df7100f05ef" path="/var/lib/kubelet/pods/a523d36e-88ee-4ec6-81f5-0df7100f05ef/volumes" Jan 22 10:09:16 crc kubenswrapper[4892]: I0122 10:09:16.323572 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:09:16 crc kubenswrapper[4892]: I0122 10:09:16.324179 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:09:16 crc kubenswrapper[4892]: I0122 10:09:16.324243 4892 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" Jan 22 10:09:16 crc kubenswrapper[4892]: I0122 10:09:16.325031 4892 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"635df7370cafeb8860a59e6538755fbe775d6f245fb65c173fd55abb94f89570"} pod="openshift-machine-config-operator/machine-config-daemon-w87tf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 10:09:16 crc kubenswrapper[4892]: I0122 10:09:16.325093 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" containerID="cri-o://635df7370cafeb8860a59e6538755fbe775d6f245fb65c173fd55abb94f89570" gracePeriod=600 Jan 22 10:09:17 crc kubenswrapper[4892]: I0122 10:09:17.243199 4892 generic.go:334] "Generic (PLEG): container finished" podID="4765e554-3060-4876-90fe-5e054619d7a1" containerID="635df7370cafeb8860a59e6538755fbe775d6f245fb65c173fd55abb94f89570" exitCode=0 Jan 22 10:09:17 crc kubenswrapper[4892]: I0122 10:09:17.243293 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerDied","Data":"635df7370cafeb8860a59e6538755fbe775d6f245fb65c173fd55abb94f89570"} Jan 22 10:09:17 crc kubenswrapper[4892]: I0122 10:09:17.243777 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerStarted","Data":"10bae86c283fa7a460d03cd4fb5e38372c678bc14876631954f38cd2968b54bf"} Jan 22 10:09:17 crc kubenswrapper[4892]: I0122 10:09:17.243805 4892 scope.go:117] "RemoveContainer" containerID="f02b7adbd7ecf9d9310d1d7218fcd8432ecf37dc2ae44231d1b4fbaeaaf19a06" Jan 22 10:10:00 crc kubenswrapper[4892]: I0122 10:10:00.091830 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xh58l"] Jan 22 10:10:00 crc kubenswrapper[4892]: E0122 10:10:00.096093 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a523d36e-88ee-4ec6-81f5-0df7100f05ef" containerName="extract-content" Jan 22 10:10:00 crc kubenswrapper[4892]: I0122 10:10:00.096116 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="a523d36e-88ee-4ec6-81f5-0df7100f05ef" containerName="extract-content" Jan 22 10:10:00 crc kubenswrapper[4892]: E0122 10:10:00.096130 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a523d36e-88ee-4ec6-81f5-0df7100f05ef" containerName="extract-utilities" Jan 22 10:10:00 crc kubenswrapper[4892]: I0122 10:10:00.096138 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="a523d36e-88ee-4ec6-81f5-0df7100f05ef" containerName="extract-utilities" Jan 22 10:10:00 crc kubenswrapper[4892]: E0122 10:10:00.096153 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a523d36e-88ee-4ec6-81f5-0df7100f05ef" containerName="registry-server" Jan 22 10:10:00 crc kubenswrapper[4892]: I0122 10:10:00.096160 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="a523d36e-88ee-4ec6-81f5-0df7100f05ef" containerName="registry-server" Jan 22 10:10:00 crc kubenswrapper[4892]: I0122 10:10:00.096459 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="a523d36e-88ee-4ec6-81f5-0df7100f05ef" containerName="registry-server" Jan 22 10:10:00 crc kubenswrapper[4892]: I0122 10:10:00.099692 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xh58l" Jan 22 10:10:00 crc kubenswrapper[4892]: I0122 10:10:00.116318 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xh58l"] Jan 22 10:10:00 crc kubenswrapper[4892]: I0122 10:10:00.254748 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efdf7a9e-2fd6-4af6-af1a-0248f4f35967-catalog-content\") pod \"redhat-operators-xh58l\" (UID: \"efdf7a9e-2fd6-4af6-af1a-0248f4f35967\") " pod="openshift-marketplace/redhat-operators-xh58l" Jan 22 10:10:00 crc kubenswrapper[4892]: I0122 10:10:00.255045 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efdf7a9e-2fd6-4af6-af1a-0248f4f35967-utilities\") pod \"redhat-operators-xh58l\" (UID: \"efdf7a9e-2fd6-4af6-af1a-0248f4f35967\") " pod="openshift-marketplace/redhat-operators-xh58l" Jan 22 10:10:00 crc kubenswrapper[4892]: I0122 10:10:00.255177 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsrzv\" (UniqueName: \"kubernetes.io/projected/efdf7a9e-2fd6-4af6-af1a-0248f4f35967-kube-api-access-fsrzv\") pod \"redhat-operators-xh58l\" (UID: \"efdf7a9e-2fd6-4af6-af1a-0248f4f35967\") " pod="openshift-marketplace/redhat-operators-xh58l" Jan 22 10:10:00 crc kubenswrapper[4892]: I0122 10:10:00.356615 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efdf7a9e-2fd6-4af6-af1a-0248f4f35967-catalog-content\") pod \"redhat-operators-xh58l\" (UID: \"efdf7a9e-2fd6-4af6-af1a-0248f4f35967\") " pod="openshift-marketplace/redhat-operators-xh58l" Jan 22 10:10:00 crc kubenswrapper[4892]: I0122 10:10:00.356906 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efdf7a9e-2fd6-4af6-af1a-0248f4f35967-utilities\") pod \"redhat-operators-xh58l\" (UID: \"efdf7a9e-2fd6-4af6-af1a-0248f4f35967\") " pod="openshift-marketplace/redhat-operators-xh58l" Jan 22 10:10:00 crc kubenswrapper[4892]: I0122 10:10:00.357008 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsrzv\" (UniqueName: \"kubernetes.io/projected/efdf7a9e-2fd6-4af6-af1a-0248f4f35967-kube-api-access-fsrzv\") pod \"redhat-operators-xh58l\" (UID: \"efdf7a9e-2fd6-4af6-af1a-0248f4f35967\") " pod="openshift-marketplace/redhat-operators-xh58l" Jan 22 10:10:00 crc kubenswrapper[4892]: I0122 10:10:00.357372 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efdf7a9e-2fd6-4af6-af1a-0248f4f35967-catalog-content\") pod \"redhat-operators-xh58l\" (UID: \"efdf7a9e-2fd6-4af6-af1a-0248f4f35967\") " pod="openshift-marketplace/redhat-operators-xh58l" Jan 22 10:10:00 crc kubenswrapper[4892]: I0122 10:10:00.357555 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efdf7a9e-2fd6-4af6-af1a-0248f4f35967-utilities\") pod \"redhat-operators-xh58l\" (UID: \"efdf7a9e-2fd6-4af6-af1a-0248f4f35967\") " pod="openshift-marketplace/redhat-operators-xh58l" Jan 22 10:10:00 crc kubenswrapper[4892]: I0122 10:10:00.377671 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-fsrzv\" (UniqueName: \"kubernetes.io/projected/efdf7a9e-2fd6-4af6-af1a-0248f4f35967-kube-api-access-fsrzv\") pod \"redhat-operators-xh58l\" (UID: \"efdf7a9e-2fd6-4af6-af1a-0248f4f35967\") " pod="openshift-marketplace/redhat-operators-xh58l" Jan 22 10:10:00 crc kubenswrapper[4892]: I0122 10:10:00.429693 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xh58l" Jan 22 10:10:00 crc kubenswrapper[4892]: I0122 10:10:00.930706 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xh58l"] Jan 22 10:10:01 crc kubenswrapper[4892]: I0122 10:10:01.618813 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xh58l" event={"ID":"efdf7a9e-2fd6-4af6-af1a-0248f4f35967","Type":"ContainerStarted","Data":"282256a41e79b3e7b4d28eb118b57a548b1e107f90646c85cfa97a88731af0c1"} Jan 22 10:10:01 crc kubenswrapper[4892]: I0122 10:10:01.619127 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xh58l" event={"ID":"efdf7a9e-2fd6-4af6-af1a-0248f4f35967","Type":"ContainerStarted","Data":"648ca618a8a28efb9cb4a4f6f207775da27f19afe83fbff729d352970cae3898"} Jan 22 10:10:02 crc kubenswrapper[4892]: I0122 10:10:02.627818 4892 generic.go:334] "Generic (PLEG): container finished" podID="efdf7a9e-2fd6-4af6-af1a-0248f4f35967" containerID="282256a41e79b3e7b4d28eb118b57a548b1e107f90646c85cfa97a88731af0c1" exitCode=0 Jan 22 10:10:02 crc kubenswrapper[4892]: I0122 10:10:02.627865 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xh58l" event={"ID":"efdf7a9e-2fd6-4af6-af1a-0248f4f35967","Type":"ContainerDied","Data":"282256a41e79b3e7b4d28eb118b57a548b1e107f90646c85cfa97a88731af0c1"} Jan 22 10:10:04 crc kubenswrapper[4892]: I0122 10:10:04.647061 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xh58l" event={"ID":"efdf7a9e-2fd6-4af6-af1a-0248f4f35967","Type":"ContainerStarted","Data":"4e507974bb6c0d4cf09b71c838dbb2d0687e0807b09896da6a35222f52810e43"} Jan 22 10:10:05 crc kubenswrapper[4892]: I0122 10:10:05.660067 4892 generic.go:334] "Generic (PLEG): container finished" podID="efdf7a9e-2fd6-4af6-af1a-0248f4f35967" containerID="4e507974bb6c0d4cf09b71c838dbb2d0687e0807b09896da6a35222f52810e43" exitCode=0 Jan 22 10:10:05 crc kubenswrapper[4892]: I0122 10:10:05.660148 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xh58l" event={"ID":"efdf7a9e-2fd6-4af6-af1a-0248f4f35967","Type":"ContainerDied","Data":"4e507974bb6c0d4cf09b71c838dbb2d0687e0807b09896da6a35222f52810e43"} Jan 22 10:10:08 crc kubenswrapper[4892]: I0122 10:10:08.687676 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xh58l" event={"ID":"efdf7a9e-2fd6-4af6-af1a-0248f4f35967","Type":"ContainerStarted","Data":"7863bd9be55b16540a3c33b3efd65ef7595f1d1a64760a335a6eeb72f2bafc7b"} Jan 22 10:10:08 crc kubenswrapper[4892]: I0122 10:10:08.719360 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xh58l" podStartSLOduration=3.281815744 podStartE2EDuration="8.71934345s" podCreationTimestamp="2026-01-22 10:10:00 +0000 UTC" firstStartedPulling="2026-01-22 10:10:02.629891956 +0000 UTC m=+3572.473971019" lastFinishedPulling="2026-01-22 10:10:08.067419662 +0000 UTC m=+3577.911498725" observedRunningTime="2026-01-22 
10:10:08.716875358 +0000 UTC m=+3578.560954431" watchObservedRunningTime="2026-01-22 10:10:08.71934345 +0000 UTC m=+3578.563422513" Jan 22 10:10:10 crc kubenswrapper[4892]: I0122 10:10:10.430594 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xh58l" Jan 22 10:10:10 crc kubenswrapper[4892]: I0122 10:10:10.430979 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xh58l" Jan 22 10:10:11 crc kubenswrapper[4892]: I0122 10:10:11.479556 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xh58l" podUID="efdf7a9e-2fd6-4af6-af1a-0248f4f35967" containerName="registry-server" probeResult="failure" output=< Jan 22 10:10:11 crc kubenswrapper[4892]: timeout: failed to connect service ":50051" within 1s Jan 22 10:10:11 crc kubenswrapper[4892]: > Jan 22 10:10:20 crc kubenswrapper[4892]: I0122 10:10:20.478979 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xh58l" Jan 22 10:10:20 crc kubenswrapper[4892]: I0122 10:10:20.536994 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xh58l" Jan 22 10:10:20 crc kubenswrapper[4892]: I0122 10:10:20.721084 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xh58l"] Jan 22 10:10:21 crc kubenswrapper[4892]: I0122 10:10:21.795530 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xh58l" podUID="efdf7a9e-2fd6-4af6-af1a-0248f4f35967" containerName="registry-server" containerID="cri-o://7863bd9be55b16540a3c33b3efd65ef7595f1d1a64760a335a6eeb72f2bafc7b" gracePeriod=2 Jan 22 10:10:22 crc kubenswrapper[4892]: I0122 10:10:22.320054 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xh58l" Jan 22 10:10:22 crc kubenswrapper[4892]: I0122 10:10:22.415307 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efdf7a9e-2fd6-4af6-af1a-0248f4f35967-catalog-content\") pod \"efdf7a9e-2fd6-4af6-af1a-0248f4f35967\" (UID: \"efdf7a9e-2fd6-4af6-af1a-0248f4f35967\") " Jan 22 10:10:22 crc kubenswrapper[4892]: I0122 10:10:22.415457 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efdf7a9e-2fd6-4af6-af1a-0248f4f35967-utilities\") pod \"efdf7a9e-2fd6-4af6-af1a-0248f4f35967\" (UID: \"efdf7a9e-2fd6-4af6-af1a-0248f4f35967\") " Jan 22 10:10:22 crc kubenswrapper[4892]: I0122 10:10:22.415566 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsrzv\" (UniqueName: \"kubernetes.io/projected/efdf7a9e-2fd6-4af6-af1a-0248f4f35967-kube-api-access-fsrzv\") pod \"efdf7a9e-2fd6-4af6-af1a-0248f4f35967\" (UID: \"efdf7a9e-2fd6-4af6-af1a-0248f4f35967\") " Jan 22 10:10:22 crc kubenswrapper[4892]: I0122 10:10:22.416247 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efdf7a9e-2fd6-4af6-af1a-0248f4f35967-utilities" (OuterVolumeSpecName: "utilities") pod "efdf7a9e-2fd6-4af6-af1a-0248f4f35967" (UID: "efdf7a9e-2fd6-4af6-af1a-0248f4f35967"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:10:22 crc kubenswrapper[4892]: I0122 10:10:22.423916 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdf7a9e-2fd6-4af6-af1a-0248f4f35967-kube-api-access-fsrzv" (OuterVolumeSpecName: "kube-api-access-fsrzv") pod "efdf7a9e-2fd6-4af6-af1a-0248f4f35967" (UID: "efdf7a9e-2fd6-4af6-af1a-0248f4f35967"). InnerVolumeSpecName "kube-api-access-fsrzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:10:22 crc kubenswrapper[4892]: I0122 10:10:22.517823 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsrzv\" (UniqueName: \"kubernetes.io/projected/efdf7a9e-2fd6-4af6-af1a-0248f4f35967-kube-api-access-fsrzv\") on node \"crc\" DevicePath \"\"" Jan 22 10:10:22 crc kubenswrapper[4892]: I0122 10:10:22.517861 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efdf7a9e-2fd6-4af6-af1a-0248f4f35967-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:10:22 crc kubenswrapper[4892]: I0122 10:10:22.534058 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efdf7a9e-2fd6-4af6-af1a-0248f4f35967-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "efdf7a9e-2fd6-4af6-af1a-0248f4f35967" (UID: "efdf7a9e-2fd6-4af6-af1a-0248f4f35967"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:10:22 crc kubenswrapper[4892]: I0122 10:10:22.619769 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efdf7a9e-2fd6-4af6-af1a-0248f4f35967-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:10:22 crc kubenswrapper[4892]: I0122 10:10:22.816737 4892 generic.go:334] "Generic (PLEG): container finished" podID="efdf7a9e-2fd6-4af6-af1a-0248f4f35967" containerID="7863bd9be55b16540a3c33b3efd65ef7595f1d1a64760a335a6eeb72f2bafc7b" exitCode=0 Jan 22 10:10:22 crc kubenswrapper[4892]: I0122 10:10:22.816795 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xh58l" event={"ID":"efdf7a9e-2fd6-4af6-af1a-0248f4f35967","Type":"ContainerDied","Data":"7863bd9be55b16540a3c33b3efd65ef7595f1d1a64760a335a6eeb72f2bafc7b"} Jan 22 10:10:22 crc kubenswrapper[4892]: I0122 10:10:22.816833 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xh58l" event={"ID":"efdf7a9e-2fd6-4af6-af1a-0248f4f35967","Type":"ContainerDied","Data":"648ca618a8a28efb9cb4a4f6f207775da27f19afe83fbff729d352970cae3898"} Jan 22 10:10:22 crc kubenswrapper[4892]: I0122 10:10:22.816868 4892 scope.go:117] "RemoveContainer" containerID="7863bd9be55b16540a3c33b3efd65ef7595f1d1a64760a335a6eeb72f2bafc7b" Jan 22 10:10:22 crc kubenswrapper[4892]: I0122 10:10:22.816896 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xh58l" Jan 22 10:10:22 crc kubenswrapper[4892]: I0122 10:10:22.855163 4892 scope.go:117] "RemoveContainer" containerID="4e507974bb6c0d4cf09b71c838dbb2d0687e0807b09896da6a35222f52810e43" Jan 22 10:10:22 crc kubenswrapper[4892]: I0122 10:10:22.860085 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xh58l"] Jan 22 10:10:22 crc kubenswrapper[4892]: I0122 10:10:22.873377 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xh58l"] Jan 22 10:10:22 crc kubenswrapper[4892]: I0122 10:10:22.882304 4892 scope.go:117] "RemoveContainer" containerID="282256a41e79b3e7b4d28eb118b57a548b1e107f90646c85cfa97a88731af0c1" Jan 22 10:10:22 crc kubenswrapper[4892]: I0122 10:10:22.919561 4892 scope.go:117] "RemoveContainer" containerID="7863bd9be55b16540a3c33b3efd65ef7595f1d1a64760a335a6eeb72f2bafc7b" Jan 22 10:10:22 crc kubenswrapper[4892]: E0122 10:10:22.920329 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7863bd9be55b16540a3c33b3efd65ef7595f1d1a64760a335a6eeb72f2bafc7b\": container with ID starting with 7863bd9be55b16540a3c33b3efd65ef7595f1d1a64760a335a6eeb72f2bafc7b not found: ID does not exist" containerID="7863bd9be55b16540a3c33b3efd65ef7595f1d1a64760a335a6eeb72f2bafc7b" Jan 22 10:10:22 crc kubenswrapper[4892]: I0122 10:10:22.920387 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7863bd9be55b16540a3c33b3efd65ef7595f1d1a64760a335a6eeb72f2bafc7b"} err="failed to get container status \"7863bd9be55b16540a3c33b3efd65ef7595f1d1a64760a335a6eeb72f2bafc7b\": rpc error: code = NotFound desc = could not find container \"7863bd9be55b16540a3c33b3efd65ef7595f1d1a64760a335a6eeb72f2bafc7b\": container with ID starting with 7863bd9be55b16540a3c33b3efd65ef7595f1d1a64760a335a6eeb72f2bafc7b not found: ID does not exist" Jan 22 10:10:22 crc kubenswrapper[4892]: I0122 10:10:22.920423 4892 scope.go:117] "RemoveContainer" containerID="4e507974bb6c0d4cf09b71c838dbb2d0687e0807b09896da6a35222f52810e43" Jan 22 10:10:22 crc kubenswrapper[4892]: E0122 10:10:22.920786 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e507974bb6c0d4cf09b71c838dbb2d0687e0807b09896da6a35222f52810e43\": container with ID starting with 4e507974bb6c0d4cf09b71c838dbb2d0687e0807b09896da6a35222f52810e43 not found: ID does not exist" containerID="4e507974bb6c0d4cf09b71c838dbb2d0687e0807b09896da6a35222f52810e43" Jan 22 10:10:22 crc kubenswrapper[4892]: I0122 10:10:22.920881 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e507974bb6c0d4cf09b71c838dbb2d0687e0807b09896da6a35222f52810e43"} err="failed to get container status \"4e507974bb6c0d4cf09b71c838dbb2d0687e0807b09896da6a35222f52810e43\": rpc error: code = NotFound desc = could not find container \"4e507974bb6c0d4cf09b71c838dbb2d0687e0807b09896da6a35222f52810e43\": container with ID starting with 4e507974bb6c0d4cf09b71c838dbb2d0687e0807b09896da6a35222f52810e43 not found: ID does not exist" Jan 22 10:10:22 crc kubenswrapper[4892]: I0122 10:10:22.920965 4892 scope.go:117] "RemoveContainer" containerID="282256a41e79b3e7b4d28eb118b57a548b1e107f90646c85cfa97a88731af0c1" Jan 22 10:10:22 crc kubenswrapper[4892]: E0122 10:10:22.921263 4892 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"282256a41e79b3e7b4d28eb118b57a548b1e107f90646c85cfa97a88731af0c1\": container with ID starting with 282256a41e79b3e7b4d28eb118b57a548b1e107f90646c85cfa97a88731af0c1 not found: ID does not exist" containerID="282256a41e79b3e7b4d28eb118b57a548b1e107f90646c85cfa97a88731af0c1" Jan 22 10:10:22 crc kubenswrapper[4892]: I0122 10:10:22.921305 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"282256a41e79b3e7b4d28eb118b57a548b1e107f90646c85cfa97a88731af0c1"} err="failed to get container status \"282256a41e79b3e7b4d28eb118b57a548b1e107f90646c85cfa97a88731af0c1\": rpc error: code = NotFound desc = could not find container \"282256a41e79b3e7b4d28eb118b57a548b1e107f90646c85cfa97a88731af0c1\": container with ID starting with 282256a41e79b3e7b4d28eb118b57a548b1e107f90646c85cfa97a88731af0c1 not found: ID does not exist" Jan 22 10:10:23 crc kubenswrapper[4892]: I0122 10:10:23.430141 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdf7a9e-2fd6-4af6-af1a-0248f4f35967" path="/var/lib/kubelet/pods/efdf7a9e-2fd6-4af6-af1a-0248f4f35967/volumes" Jan 22 10:11:16 crc kubenswrapper[4892]: I0122 10:11:16.323458 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:11:16 crc kubenswrapper[4892]: I0122 10:11:16.323995 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:11:46 crc kubenswrapper[4892]: I0122 10:11:46.323545 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:11:46 crc kubenswrapper[4892]: I0122 10:11:46.324143 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:12:16 crc kubenswrapper[4892]: I0122 10:12:16.323040 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:12:16 crc kubenswrapper[4892]: I0122 10:12:16.323684 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:12:16 crc kubenswrapper[4892]: I0122 10:12:16.323734 4892 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" Jan 22 10:12:16 crc kubenswrapper[4892]: I0122 10:12:16.324613 4892 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"10bae86c283fa7a460d03cd4fb5e38372c678bc14876631954f38cd2968b54bf"} pod="openshift-machine-config-operator/machine-config-daemon-w87tf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 10:12:16 crc kubenswrapper[4892]: I0122 10:12:16.324690 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" containerID="cri-o://10bae86c283fa7a460d03cd4fb5e38372c678bc14876631954f38cd2968b54bf" gracePeriod=600 Jan 22 10:12:16 crc kubenswrapper[4892]: E0122 10:12:16.446898 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:12:16 crc kubenswrapper[4892]: E0122 10:12:16.561793 4892 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4765e554_3060_4876_90fe_5e054619d7a1.slice/crio-conmon-10bae86c283fa7a460d03cd4fb5e38372c678bc14876631954f38cd2968b54bf.scope\": RecentStats: unable to find data in memory cache]" Jan 22 10:12:16 crc kubenswrapper[4892]: I0122 10:12:16.870974 4892 generic.go:334] "Generic (PLEG): container finished" podID="4765e554-3060-4876-90fe-5e054619d7a1" containerID="10bae86c283fa7a460d03cd4fb5e38372c678bc14876631954f38cd2968b54bf" exitCode=0 Jan 22 10:12:16 crc kubenswrapper[4892]: I0122 10:12:16.871234 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerDied","Data":"10bae86c283fa7a460d03cd4fb5e38372c678bc14876631954f38cd2968b54bf"} Jan 22 10:12:16 crc kubenswrapper[4892]: I0122 10:12:16.871427 4892 scope.go:117] "RemoveContainer" containerID="635df7370cafeb8860a59e6538755fbe775d6f245fb65c173fd55abb94f89570" Jan 22 10:12:16 crc kubenswrapper[4892]: I0122 10:12:16.872045 4892 scope.go:117] "RemoveContainer" containerID="10bae86c283fa7a460d03cd4fb5e38372c678bc14876631954f38cd2968b54bf" Jan 22 10:12:16 crc kubenswrapper[4892]: E0122 10:12:16.872487 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:12:26 crc kubenswrapper[4892]: E0122 10:12:26.812856 4892 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 22 10:12:29 crc 
kubenswrapper[4892]: I0122 10:12:29.418575 4892 scope.go:117] "RemoveContainer" containerID="10bae86c283fa7a460d03cd4fb5e38372c678bc14876631954f38cd2968b54bf" Jan 22 10:12:29 crc kubenswrapper[4892]: E0122 10:12:29.419318 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:12:40 crc kubenswrapper[4892]: I0122 10:12:40.063012 4892 generic.go:334] "Generic (PLEG): container finished" podID="13171535-bfb7-4114-884d-b9b031615de3" containerID="e20e5b958b227c79c109817fc0b6e5f82b67a482502a8bf37f96d59d0a4232f6" exitCode=0 Jan 22 10:12:40 crc kubenswrapper[4892]: I0122 10:12:40.063264 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"13171535-bfb7-4114-884d-b9b031615de3","Type":"ContainerDied","Data":"e20e5b958b227c79c109817fc0b6e5f82b67a482502a8bf37f96d59d0a4232f6"} Jan 22 10:12:41 crc kubenswrapper[4892]: I0122 10:12:41.443054 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 22 10:12:41 crc kubenswrapper[4892]: I0122 10:12:41.623373 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/13171535-bfb7-4114-884d-b9b031615de3-openstack-config-secret\") pod \"13171535-bfb7-4114-884d-b9b031615de3\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " Jan 22 10:12:41 crc kubenswrapper[4892]: I0122 10:12:41.623477 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97jrb\" (UniqueName: \"kubernetes.io/projected/13171535-bfb7-4114-884d-b9b031615de3-kube-api-access-97jrb\") pod \"13171535-bfb7-4114-884d-b9b031615de3\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " Jan 22 10:12:41 crc kubenswrapper[4892]: I0122 10:12:41.623531 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/13171535-bfb7-4114-884d-b9b031615de3-test-operator-ephemeral-workdir\") pod \"13171535-bfb7-4114-884d-b9b031615de3\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " Jan 22 10:12:41 crc kubenswrapper[4892]: I0122 10:12:41.623613 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"13171535-bfb7-4114-884d-b9b031615de3\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " Jan 22 10:12:41 crc kubenswrapper[4892]: I0122 10:12:41.623675 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/13171535-bfb7-4114-884d-b9b031615de3-ssh-key\") pod \"13171535-bfb7-4114-884d-b9b031615de3\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " Jan 22 10:12:41 crc kubenswrapper[4892]: I0122 10:12:41.623708 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/13171535-bfb7-4114-884d-b9b031615de3-openstack-config\") pod \"13171535-bfb7-4114-884d-b9b031615de3\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " Jan 22 
10:12:41 crc kubenswrapper[4892]: I0122 10:12:41.623745 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/13171535-bfb7-4114-884d-b9b031615de3-test-operator-ephemeral-temporary\") pod \"13171535-bfb7-4114-884d-b9b031615de3\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " Jan 22 10:12:41 crc kubenswrapper[4892]: I0122 10:12:41.623805 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13171535-bfb7-4114-884d-b9b031615de3-config-data\") pod \"13171535-bfb7-4114-884d-b9b031615de3\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " Jan 22 10:12:41 crc kubenswrapper[4892]: I0122 10:12:41.624233 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13171535-bfb7-4114-884d-b9b031615de3-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "13171535-bfb7-4114-884d-b9b031615de3" (UID: "13171535-bfb7-4114-884d-b9b031615de3"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:12:41 crc kubenswrapper[4892]: I0122 10:12:41.624348 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/13171535-bfb7-4114-884d-b9b031615de3-ca-certs\") pod \"13171535-bfb7-4114-884d-b9b031615de3\" (UID: \"13171535-bfb7-4114-884d-b9b031615de3\") " Jan 22 10:12:41 crc kubenswrapper[4892]: I0122 10:12:41.624560 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13171535-bfb7-4114-884d-b9b031615de3-config-data" (OuterVolumeSpecName: "config-data") pod "13171535-bfb7-4114-884d-b9b031615de3" (UID: "13171535-bfb7-4114-884d-b9b031615de3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:12:41 crc kubenswrapper[4892]: I0122 10:12:41.625238 4892 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/13171535-bfb7-4114-884d-b9b031615de3-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 22 10:12:41 crc kubenswrapper[4892]: I0122 10:12:41.625267 4892 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13171535-bfb7-4114-884d-b9b031615de3-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:12:41 crc kubenswrapper[4892]: I0122 10:12:41.626862 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13171535-bfb7-4114-884d-b9b031615de3-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "13171535-bfb7-4114-884d-b9b031615de3" (UID: "13171535-bfb7-4114-884d-b9b031615de3"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:12:41 crc kubenswrapper[4892]: I0122 10:12:41.629123 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13171535-bfb7-4114-884d-b9b031615de3-kube-api-access-97jrb" (OuterVolumeSpecName: "kube-api-access-97jrb") pod "13171535-bfb7-4114-884d-b9b031615de3" (UID: "13171535-bfb7-4114-884d-b9b031615de3"). InnerVolumeSpecName "kube-api-access-97jrb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:12:41 crc kubenswrapper[4892]: I0122 10:12:41.631356 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "test-operator-logs") pod "13171535-bfb7-4114-884d-b9b031615de3" (UID: "13171535-bfb7-4114-884d-b9b031615de3"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 10:12:41 crc kubenswrapper[4892]: I0122 10:12:41.651888 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13171535-bfb7-4114-884d-b9b031615de3-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "13171535-bfb7-4114-884d-b9b031615de3" (UID: "13171535-bfb7-4114-884d-b9b031615de3"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:12:41 crc kubenswrapper[4892]: I0122 10:12:41.657432 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13171535-bfb7-4114-884d-b9b031615de3-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "13171535-bfb7-4114-884d-b9b031615de3" (UID: "13171535-bfb7-4114-884d-b9b031615de3"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:12:41 crc kubenswrapper[4892]: I0122 10:12:41.662705 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13171535-bfb7-4114-884d-b9b031615de3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "13171535-bfb7-4114-884d-b9b031615de3" (UID: "13171535-bfb7-4114-884d-b9b031615de3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:12:41 crc kubenswrapper[4892]: I0122 10:12:41.676593 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13171535-bfb7-4114-884d-b9b031615de3-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "13171535-bfb7-4114-884d-b9b031615de3" (UID: "13171535-bfb7-4114-884d-b9b031615de3"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:12:41 crc kubenswrapper[4892]: I0122 10:12:41.727495 4892 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/13171535-bfb7-4114-884d-b9b031615de3-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 22 10:12:41 crc kubenswrapper[4892]: I0122 10:12:41.727530 4892 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/13171535-bfb7-4114-884d-b9b031615de3-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:12:41 crc kubenswrapper[4892]: I0122 10:12:41.727542 4892 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/13171535-bfb7-4114-884d-b9b031615de3-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 22 10:12:41 crc kubenswrapper[4892]: I0122 10:12:41.727551 4892 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/13171535-bfb7-4114-884d-b9b031615de3-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 22 10:12:41 crc kubenswrapper[4892]: I0122 10:12:41.727560 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97jrb\" (UniqueName: \"kubernetes.io/projected/13171535-bfb7-4114-884d-b9b031615de3-kube-api-access-97jrb\") on node \"crc\" DevicePath \"\"" Jan 22 10:12:41 crc kubenswrapper[4892]: I0122 10:12:41.727571 4892 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/13171535-bfb7-4114-884d-b9b031615de3-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 22 10:12:41 crc kubenswrapper[4892]: I0122 10:12:41.727606 4892 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 22 10:12:41 crc kubenswrapper[4892]: I0122 10:12:41.746805 4892 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 22 10:12:41 crc kubenswrapper[4892]: I0122 10:12:41.828750 4892 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 22 10:12:42 crc kubenswrapper[4892]: I0122 10:12:42.082371 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"13171535-bfb7-4114-884d-b9b031615de3","Type":"ContainerDied","Data":"782fd4812f414a1192bc43eb8731865d9633a31acedb60c8c21fff416cec2d0e"} Jan 22 10:12:42 crc kubenswrapper[4892]: I0122 10:12:42.082408 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="782fd4812f414a1192bc43eb8731865d9633a31acedb60c8c21fff416cec2d0e" Jan 22 10:12:42 crc kubenswrapper[4892]: I0122 10:12:42.082469 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 22 10:12:44 crc kubenswrapper[4892]: I0122 10:12:44.038268 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 22 10:12:44 crc kubenswrapper[4892]: E0122 10:12:44.039086 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efdf7a9e-2fd6-4af6-af1a-0248f4f35967" containerName="registry-server" Jan 22 10:12:44 crc kubenswrapper[4892]: I0122 10:12:44.039103 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="efdf7a9e-2fd6-4af6-af1a-0248f4f35967" containerName="registry-server" Jan 22 10:12:44 crc kubenswrapper[4892]: E0122 10:12:44.039135 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efdf7a9e-2fd6-4af6-af1a-0248f4f35967" containerName="extract-content" Jan 22 10:12:44 crc kubenswrapper[4892]: I0122 10:12:44.039144 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="efdf7a9e-2fd6-4af6-af1a-0248f4f35967" containerName="extract-content" Jan 22 10:12:44 crc kubenswrapper[4892]: E0122 10:12:44.039155 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13171535-bfb7-4114-884d-b9b031615de3" containerName="tempest-tests-tempest-tests-runner" Jan 22 10:12:44 crc kubenswrapper[4892]: I0122 10:12:44.039164 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="13171535-bfb7-4114-884d-b9b031615de3" containerName="tempest-tests-tempest-tests-runner" Jan 22 10:12:44 crc kubenswrapper[4892]: E0122 10:12:44.039180 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efdf7a9e-2fd6-4af6-af1a-0248f4f35967" containerName="extract-utilities" Jan 22 10:12:44 crc kubenswrapper[4892]: I0122 10:12:44.039188 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="efdf7a9e-2fd6-4af6-af1a-0248f4f35967" containerName="extract-utilities" Jan 22 10:12:44 crc kubenswrapper[4892]: I0122 10:12:44.039478 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="efdf7a9e-2fd6-4af6-af1a-0248f4f35967" containerName="registry-server" Jan 22 10:12:44 crc kubenswrapper[4892]: I0122 10:12:44.039504 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="13171535-bfb7-4114-884d-b9b031615de3" containerName="tempest-tests-tempest-tests-runner" Jan 22 10:12:44 crc kubenswrapper[4892]: I0122 10:12:44.040324 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 10:12:44 crc kubenswrapper[4892]: I0122 10:12:44.042676 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-7c4lt" Jan 22 10:12:44 crc kubenswrapper[4892]: I0122 10:12:44.058042 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 22 10:12:44 crc kubenswrapper[4892]: I0122 10:12:44.173222 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6efa520d-5ef8-49b9-b90f-197efdf100ed\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 10:12:44 crc kubenswrapper[4892]: I0122 10:12:44.173824 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdfq8\" (UniqueName: \"kubernetes.io/projected/6efa520d-5ef8-49b9-b90f-197efdf100ed-kube-api-access-cdfq8\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6efa520d-5ef8-49b9-b90f-197efdf100ed\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 10:12:44 crc kubenswrapper[4892]: I0122 10:12:44.276225 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6efa520d-5ef8-49b9-b90f-197efdf100ed\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 10:12:44 crc kubenswrapper[4892]: I0122 10:12:44.276323 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdfq8\" (UniqueName: \"kubernetes.io/projected/6efa520d-5ef8-49b9-b90f-197efdf100ed-kube-api-access-cdfq8\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6efa520d-5ef8-49b9-b90f-197efdf100ed\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 10:12:44 crc kubenswrapper[4892]: I0122 10:12:44.276946 4892 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6efa520d-5ef8-49b9-b90f-197efdf100ed\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 10:12:44 crc kubenswrapper[4892]: I0122 10:12:44.304030 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdfq8\" (UniqueName: \"kubernetes.io/projected/6efa520d-5ef8-49b9-b90f-197efdf100ed-kube-api-access-cdfq8\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6efa520d-5ef8-49b9-b90f-197efdf100ed\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 10:12:44 crc kubenswrapper[4892]: I0122 10:12:44.304848 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6efa520d-5ef8-49b9-b90f-197efdf100ed\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 10:12:44 crc 
kubenswrapper[4892]: I0122 10:12:44.368925 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 10:12:44 crc kubenswrapper[4892]: I0122 10:12:44.419361 4892 scope.go:117] "RemoveContainer" containerID="10bae86c283fa7a460d03cd4fb5e38372c678bc14876631954f38cd2968b54bf" Jan 22 10:12:44 crc kubenswrapper[4892]: E0122 10:12:44.421118 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:12:44 crc kubenswrapper[4892]: I0122 10:12:44.849670 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 22 10:12:45 crc kubenswrapper[4892]: I0122 10:12:45.111497 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"6efa520d-5ef8-49b9-b90f-197efdf100ed","Type":"ContainerStarted","Data":"9383c9d32f74bc7a00a4f99ad4dd0e61a66a1e08f1e2e86b521c96c9386476ec"} Jan 22 10:12:46 crc kubenswrapper[4892]: I0122 10:12:46.122124 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"6efa520d-5ef8-49b9-b90f-197efdf100ed","Type":"ContainerStarted","Data":"fcffda7d783abd3b778d827a3d950ec9a152d7aab0871ab02099631c15b59221"} Jan 22 10:12:46 crc kubenswrapper[4892]: I0122 10:12:46.144757 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.309272295 podStartE2EDuration="2.144741441s" podCreationTimestamp="2026-01-22 10:12:44 +0000 UTC" firstStartedPulling="2026-01-22 10:12:44.861406514 +0000 UTC m=+3734.705485577" lastFinishedPulling="2026-01-22 10:12:45.69687565 +0000 UTC m=+3735.540954723" observedRunningTime="2026-01-22 10:12:46.140780321 +0000 UTC m=+3735.984859384" watchObservedRunningTime="2026-01-22 10:12:46.144741441 +0000 UTC m=+3735.988820494" Jan 22 10:12:53 crc kubenswrapper[4892]: I0122 10:12:53.723411 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l6dk7"] Jan 22 10:12:53 crc kubenswrapper[4892]: I0122 10:12:53.726096 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l6dk7" Jan 22 10:12:53 crc kubenswrapper[4892]: I0122 10:12:53.735228 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l6dk7"] Jan 22 10:12:53 crc kubenswrapper[4892]: I0122 10:12:53.754664 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28838fa1-8623-491b-a87e-3323c10e5eda-catalog-content\") pod \"community-operators-l6dk7\" (UID: \"28838fa1-8623-491b-a87e-3323c10e5eda\") " pod="openshift-marketplace/community-operators-l6dk7" Jan 22 10:12:53 crc kubenswrapper[4892]: I0122 10:12:53.754746 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9h2v\" (UniqueName: \"kubernetes.io/projected/28838fa1-8623-491b-a87e-3323c10e5eda-kube-api-access-j9h2v\") pod \"community-operators-l6dk7\" (UID: \"28838fa1-8623-491b-a87e-3323c10e5eda\") " pod="openshift-marketplace/community-operators-l6dk7" Jan 22 10:12:53 crc kubenswrapper[4892]: I0122 10:12:53.754796 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28838fa1-8623-491b-a87e-3323c10e5eda-utilities\") pod \"community-operators-l6dk7\" (UID: \"28838fa1-8623-491b-a87e-3323c10e5eda\") " pod="openshift-marketplace/community-operators-l6dk7" Jan 22 10:12:53 crc kubenswrapper[4892]: I0122 10:12:53.856350 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28838fa1-8623-491b-a87e-3323c10e5eda-utilities\") pod \"community-operators-l6dk7\" (UID: \"28838fa1-8623-491b-a87e-3323c10e5eda\") " pod="openshift-marketplace/community-operators-l6dk7" Jan 22 10:12:53 crc kubenswrapper[4892]: I0122 10:12:53.856816 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28838fa1-8623-491b-a87e-3323c10e5eda-catalog-content\") pod \"community-operators-l6dk7\" (UID: \"28838fa1-8623-491b-a87e-3323c10e5eda\") " pod="openshift-marketplace/community-operators-l6dk7" Jan 22 10:12:53 crc kubenswrapper[4892]: I0122 10:12:53.856882 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9h2v\" (UniqueName: \"kubernetes.io/projected/28838fa1-8623-491b-a87e-3323c10e5eda-kube-api-access-j9h2v\") pod \"community-operators-l6dk7\" (UID: \"28838fa1-8623-491b-a87e-3323c10e5eda\") " pod="openshift-marketplace/community-operators-l6dk7" Jan 22 10:12:53 crc kubenswrapper[4892]: I0122 10:12:53.856957 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28838fa1-8623-491b-a87e-3323c10e5eda-utilities\") pod \"community-operators-l6dk7\" (UID: \"28838fa1-8623-491b-a87e-3323c10e5eda\") " pod="openshift-marketplace/community-operators-l6dk7" Jan 22 10:12:53 crc kubenswrapper[4892]: I0122 10:12:53.857228 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28838fa1-8623-491b-a87e-3323c10e5eda-catalog-content\") pod \"community-operators-l6dk7\" (UID: \"28838fa1-8623-491b-a87e-3323c10e5eda\") " pod="openshift-marketplace/community-operators-l6dk7" Jan 22 10:12:53 crc kubenswrapper[4892]: I0122 10:12:53.882301 4892 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-j9h2v\" (UniqueName: \"kubernetes.io/projected/28838fa1-8623-491b-a87e-3323c10e5eda-kube-api-access-j9h2v\") pod \"community-operators-l6dk7\" (UID: \"28838fa1-8623-491b-a87e-3323c10e5eda\") " pod="openshift-marketplace/community-operators-l6dk7" Jan 22 10:12:54 crc kubenswrapper[4892]: I0122 10:12:54.050679 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l6dk7" Jan 22 10:12:54 crc kubenswrapper[4892]: I0122 10:12:54.631177 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l6dk7"] Jan 22 10:12:55 crc kubenswrapper[4892]: I0122 10:12:55.285525 4892 generic.go:334] "Generic (PLEG): container finished" podID="28838fa1-8623-491b-a87e-3323c10e5eda" containerID="8b4f8ff64c5c8ef0720dcf09f3c6780a2b60a0e71fbc4e3c77e8f42d9494a226" exitCode=0 Jan 22 10:12:55 crc kubenswrapper[4892]: I0122 10:12:55.285634 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l6dk7" event={"ID":"28838fa1-8623-491b-a87e-3323c10e5eda","Type":"ContainerDied","Data":"8b4f8ff64c5c8ef0720dcf09f3c6780a2b60a0e71fbc4e3c77e8f42d9494a226"} Jan 22 10:12:55 crc kubenswrapper[4892]: I0122 10:12:55.285802 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l6dk7" event={"ID":"28838fa1-8623-491b-a87e-3323c10e5eda","Type":"ContainerStarted","Data":"f5cdb0ea1e3857883ea1b81525b204c48091eb84ed2c3abfc5d41e3c51cefea9"} Jan 22 10:12:56 crc kubenswrapper[4892]: I0122 10:12:56.313343 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l6dk7" event={"ID":"28838fa1-8623-491b-a87e-3323c10e5eda","Type":"ContainerStarted","Data":"17081fda690154c325b201925fceb3c9232ea3f7a13482a73ff3a57ed9516310"} Jan 22 10:12:57 crc kubenswrapper[4892]: I0122 10:12:57.321724 4892 generic.go:334] "Generic (PLEG): container finished" podID="28838fa1-8623-491b-a87e-3323c10e5eda" containerID="17081fda690154c325b201925fceb3c9232ea3f7a13482a73ff3a57ed9516310" exitCode=0 Jan 22 10:12:57 crc kubenswrapper[4892]: I0122 10:12:57.321796 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l6dk7" event={"ID":"28838fa1-8623-491b-a87e-3323c10e5eda","Type":"ContainerDied","Data":"17081fda690154c325b201925fceb3c9232ea3f7a13482a73ff3a57ed9516310"} Jan 22 10:12:57 crc kubenswrapper[4892]: I0122 10:12:57.418169 4892 scope.go:117] "RemoveContainer" containerID="10bae86c283fa7a460d03cd4fb5e38372c678bc14876631954f38cd2968b54bf" Jan 22 10:12:57 crc kubenswrapper[4892]: E0122 10:12:57.418411 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:12:58 crc kubenswrapper[4892]: I0122 10:12:58.332263 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l6dk7" event={"ID":"28838fa1-8623-491b-a87e-3323c10e5eda","Type":"ContainerStarted","Data":"36ffe5f5f149f58698a5b70ed0025419bafe85174537a888850670a98793e850"} Jan 22 10:12:58 crc kubenswrapper[4892]: I0122 10:12:58.357523 4892 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l6dk7" podStartSLOduration=2.8049694069999997 podStartE2EDuration="5.357504424s" podCreationTimestamp="2026-01-22 10:12:53 +0000 UTC" firstStartedPulling="2026-01-22 10:12:55.286856749 +0000 UTC m=+3745.130935802" lastFinishedPulling="2026-01-22 10:12:57.839391756 +0000 UTC m=+3747.683470819" observedRunningTime="2026-01-22 10:12:58.347013999 +0000 UTC m=+3748.191093062" watchObservedRunningTime="2026-01-22 10:12:58.357504424 +0000 UTC m=+3748.201583487" Jan 22 10:13:04 crc kubenswrapper[4892]: I0122 10:13:04.052299 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l6dk7" Jan 22 10:13:04 crc kubenswrapper[4892]: I0122 10:13:04.052971 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l6dk7" Jan 22 10:13:04 crc kubenswrapper[4892]: I0122 10:13:04.104310 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l6dk7" Jan 22 10:13:04 crc kubenswrapper[4892]: I0122 10:13:04.424429 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l6dk7" Jan 22 10:13:04 crc kubenswrapper[4892]: I0122 10:13:04.482544 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l6dk7"] Jan 22 10:13:06 crc kubenswrapper[4892]: I0122 10:13:06.396267 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l6dk7" podUID="28838fa1-8623-491b-a87e-3323c10e5eda" containerName="registry-server" containerID="cri-o://36ffe5f5f149f58698a5b70ed0025419bafe85174537a888850670a98793e850" gracePeriod=2 Jan 22 10:13:07 crc kubenswrapper[4892]: I0122 10:13:07.407216 4892 generic.go:334] "Generic (PLEG): container finished" podID="28838fa1-8623-491b-a87e-3323c10e5eda" containerID="36ffe5f5f149f58698a5b70ed0025419bafe85174537a888850670a98793e850" exitCode=0 Jan 22 10:13:07 crc kubenswrapper[4892]: I0122 10:13:07.407303 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l6dk7" event={"ID":"28838fa1-8623-491b-a87e-3323c10e5eda","Type":"ContainerDied","Data":"36ffe5f5f149f58698a5b70ed0025419bafe85174537a888850670a98793e850"} Jan 22 10:13:07 crc kubenswrapper[4892]: I0122 10:13:07.954490 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l6dk7" Jan 22 10:13:08 crc kubenswrapper[4892]: I0122 10:13:08.042963 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28838fa1-8623-491b-a87e-3323c10e5eda-utilities\") pod \"28838fa1-8623-491b-a87e-3323c10e5eda\" (UID: \"28838fa1-8623-491b-a87e-3323c10e5eda\") " Jan 22 10:13:08 crc kubenswrapper[4892]: I0122 10:13:08.043029 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9h2v\" (UniqueName: \"kubernetes.io/projected/28838fa1-8623-491b-a87e-3323c10e5eda-kube-api-access-j9h2v\") pod \"28838fa1-8623-491b-a87e-3323c10e5eda\" (UID: \"28838fa1-8623-491b-a87e-3323c10e5eda\") " Jan 22 10:13:08 crc kubenswrapper[4892]: I0122 10:13:08.043091 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28838fa1-8623-491b-a87e-3323c10e5eda-catalog-content\") pod \"28838fa1-8623-491b-a87e-3323c10e5eda\" (UID: \"28838fa1-8623-491b-a87e-3323c10e5eda\") " Jan 22 10:13:08 crc kubenswrapper[4892]: I0122 10:13:08.044228 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28838fa1-8623-491b-a87e-3323c10e5eda-utilities" (OuterVolumeSpecName: "utilities") pod "28838fa1-8623-491b-a87e-3323c10e5eda" (UID: "28838fa1-8623-491b-a87e-3323c10e5eda"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:13:08 crc kubenswrapper[4892]: I0122 10:13:08.049443 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28838fa1-8623-491b-a87e-3323c10e5eda-kube-api-access-j9h2v" (OuterVolumeSpecName: "kube-api-access-j9h2v") pod "28838fa1-8623-491b-a87e-3323c10e5eda" (UID: "28838fa1-8623-491b-a87e-3323c10e5eda"). InnerVolumeSpecName "kube-api-access-j9h2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:13:08 crc kubenswrapper[4892]: I0122 10:13:08.105251 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28838fa1-8623-491b-a87e-3323c10e5eda-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "28838fa1-8623-491b-a87e-3323c10e5eda" (UID: "28838fa1-8623-491b-a87e-3323c10e5eda"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:13:08 crc kubenswrapper[4892]: I0122 10:13:08.144874 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28838fa1-8623-491b-a87e-3323c10e5eda-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:13:08 crc kubenswrapper[4892]: I0122 10:13:08.145173 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9h2v\" (UniqueName: \"kubernetes.io/projected/28838fa1-8623-491b-a87e-3323c10e5eda-kube-api-access-j9h2v\") on node \"crc\" DevicePath \"\"" Jan 22 10:13:08 crc kubenswrapper[4892]: I0122 10:13:08.145183 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28838fa1-8623-491b-a87e-3323c10e5eda-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:13:08 crc kubenswrapper[4892]: I0122 10:13:08.417168 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l6dk7" event={"ID":"28838fa1-8623-491b-a87e-3323c10e5eda","Type":"ContainerDied","Data":"f5cdb0ea1e3857883ea1b81525b204c48091eb84ed2c3abfc5d41e3c51cefea9"} Jan 22 10:13:08 crc kubenswrapper[4892]: I0122 10:13:08.417218 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l6dk7" Jan 22 10:13:08 crc kubenswrapper[4892]: I0122 10:13:08.417234 4892 scope.go:117] "RemoveContainer" containerID="36ffe5f5f149f58698a5b70ed0025419bafe85174537a888850670a98793e850" Jan 22 10:13:08 crc kubenswrapper[4892]: I0122 10:13:08.419065 4892 scope.go:117] "RemoveContainer" containerID="10bae86c283fa7a460d03cd4fb5e38372c678bc14876631954f38cd2968b54bf" Jan 22 10:13:08 crc kubenswrapper[4892]: E0122 10:13:08.419339 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:13:08 crc kubenswrapper[4892]: I0122 10:13:08.442515 4892 scope.go:117] "RemoveContainer" containerID="17081fda690154c325b201925fceb3c9232ea3f7a13482a73ff3a57ed9516310" Jan 22 10:13:08 crc kubenswrapper[4892]: I0122 10:13:08.460963 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l6dk7"] Jan 22 10:13:08 crc kubenswrapper[4892]: I0122 10:13:08.473657 4892 scope.go:117] "RemoveContainer" containerID="8b4f8ff64c5c8ef0720dcf09f3c6780a2b60a0e71fbc4e3c77e8f42d9494a226" Jan 22 10:13:08 crc kubenswrapper[4892]: I0122 10:13:08.475779 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l6dk7"] Jan 22 10:13:09 crc kubenswrapper[4892]: I0122 10:13:09.025296 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qgdnd/must-gather-z7s6w"] Jan 22 10:13:09 crc kubenswrapper[4892]: E0122 10:13:09.025702 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28838fa1-8623-491b-a87e-3323c10e5eda" containerName="extract-content" Jan 22 10:13:09 crc kubenswrapper[4892]: I0122 10:13:09.025723 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="28838fa1-8623-491b-a87e-3323c10e5eda" containerName="extract-content" Jan 22 10:13:09 crc kubenswrapper[4892]: E0122 
10:13:09.025740 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28838fa1-8623-491b-a87e-3323c10e5eda" containerName="extract-utilities" Jan 22 10:13:09 crc kubenswrapper[4892]: I0122 10:13:09.025749 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="28838fa1-8623-491b-a87e-3323c10e5eda" containerName="extract-utilities" Jan 22 10:13:09 crc kubenswrapper[4892]: E0122 10:13:09.025777 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28838fa1-8623-491b-a87e-3323c10e5eda" containerName="registry-server" Jan 22 10:13:09 crc kubenswrapper[4892]: I0122 10:13:09.025784 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="28838fa1-8623-491b-a87e-3323c10e5eda" containerName="registry-server" Jan 22 10:13:09 crc kubenswrapper[4892]: I0122 10:13:09.025960 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="28838fa1-8623-491b-a87e-3323c10e5eda" containerName="registry-server" Jan 22 10:13:09 crc kubenswrapper[4892]: I0122 10:13:09.026892 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qgdnd/must-gather-z7s6w" Jan 22 10:13:09 crc kubenswrapper[4892]: I0122 10:13:09.031426 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qgdnd"/"openshift-service-ca.crt" Jan 22 10:13:09 crc kubenswrapper[4892]: I0122 10:13:09.031680 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qgdnd"/"default-dockercfg-2sgrc" Jan 22 10:13:09 crc kubenswrapper[4892]: I0122 10:13:09.032652 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qgdnd"/"kube-root-ca.crt" Jan 22 10:13:09 crc kubenswrapper[4892]: I0122 10:13:09.042036 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qgdnd/must-gather-z7s6w"] Jan 22 10:13:09 crc kubenswrapper[4892]: I0122 10:13:09.064302 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3-must-gather-output\") pod \"must-gather-z7s6w\" (UID: \"7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3\") " pod="openshift-must-gather-qgdnd/must-gather-z7s6w" Jan 22 10:13:09 crc kubenswrapper[4892]: I0122 10:13:09.064419 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm95z\" (UniqueName: \"kubernetes.io/projected/7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3-kube-api-access-gm95z\") pod \"must-gather-z7s6w\" (UID: \"7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3\") " pod="openshift-must-gather-qgdnd/must-gather-z7s6w" Jan 22 10:13:09 crc kubenswrapper[4892]: I0122 10:13:09.166183 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3-must-gather-output\") pod \"must-gather-z7s6w\" (UID: \"7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3\") " pod="openshift-must-gather-qgdnd/must-gather-z7s6w" Jan 22 10:13:09 crc kubenswrapper[4892]: I0122 10:13:09.166254 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm95z\" (UniqueName: \"kubernetes.io/projected/7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3-kube-api-access-gm95z\") pod \"must-gather-z7s6w\" (UID: \"7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3\") " pod="openshift-must-gather-qgdnd/must-gather-z7s6w" Jan 22 10:13:09 crc kubenswrapper[4892]: I0122 
10:13:09.166704 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3-must-gather-output\") pod \"must-gather-z7s6w\" (UID: \"7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3\") " pod="openshift-must-gather-qgdnd/must-gather-z7s6w" Jan 22 10:13:09 crc kubenswrapper[4892]: I0122 10:13:09.360278 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm95z\" (UniqueName: \"kubernetes.io/projected/7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3-kube-api-access-gm95z\") pod \"must-gather-z7s6w\" (UID: \"7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3\") " pod="openshift-must-gather-qgdnd/must-gather-z7s6w" Jan 22 10:13:09 crc kubenswrapper[4892]: I0122 10:13:09.430036 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28838fa1-8623-491b-a87e-3323c10e5eda" path="/var/lib/kubelet/pods/28838fa1-8623-491b-a87e-3323c10e5eda/volumes" Jan 22 10:13:09 crc kubenswrapper[4892]: I0122 10:13:09.645051 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qgdnd/must-gather-z7s6w" Jan 22 10:13:10 crc kubenswrapper[4892]: I0122 10:13:10.171563 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qgdnd/must-gather-z7s6w"] Jan 22 10:13:10 crc kubenswrapper[4892]: I0122 10:13:10.440836 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qgdnd/must-gather-z7s6w" event={"ID":"7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3","Type":"ContainerStarted","Data":"5f3572cfe447184e50dbccdf10fdf724a28f539eb6c50bb418a68608d99c83c8"} Jan 22 10:13:17 crc kubenswrapper[4892]: I0122 10:13:17.507776 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qgdnd/must-gather-z7s6w" event={"ID":"7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3","Type":"ContainerStarted","Data":"505f593942a9adf3f54b17348e63bd5ac5794e476bdab09c68f356daae6c2c46"} Jan 22 10:13:17 crc kubenswrapper[4892]: I0122 10:13:17.508381 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qgdnd/must-gather-z7s6w" event={"ID":"7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3","Type":"ContainerStarted","Data":"a32dde9898a572cc0dd463aaaee71a5dd675112fa151f7af9467d42a3c49debe"} Jan 22 10:13:17 crc kubenswrapper[4892]: I0122 10:13:17.529273 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qgdnd/must-gather-z7s6w" podStartSLOduration=2.096150682 podStartE2EDuration="8.529251544s" podCreationTimestamp="2026-01-22 10:13:09 +0000 UTC" firstStartedPulling="2026-01-22 10:13:10.175954431 +0000 UTC m=+3760.020033494" lastFinishedPulling="2026-01-22 10:13:16.609055293 +0000 UTC m=+3766.453134356" observedRunningTime="2026-01-22 10:13:17.524097303 +0000 UTC m=+3767.368176366" watchObservedRunningTime="2026-01-22 10:13:17.529251544 +0000 UTC m=+3767.373330607" Jan 22 10:13:19 crc kubenswrapper[4892]: I0122 10:13:19.419171 4892 scope.go:117] "RemoveContainer" containerID="10bae86c283fa7a460d03cd4fb5e38372c678bc14876631954f38cd2968b54bf" Jan 22 10:13:19 crc kubenswrapper[4892]: E0122 10:13:19.419829 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:13:20 crc kubenswrapper[4892]: I0122 10:13:20.477361 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qgdnd/crc-debug-gr9vt"] Jan 22 10:13:20 crc kubenswrapper[4892]: I0122 10:13:20.479793 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qgdnd/crc-debug-gr9vt" Jan 22 10:13:20 crc kubenswrapper[4892]: I0122 10:13:20.515674 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/314fff2a-87cd-4542-b9b7-8ce3a3aec0e9-host\") pod \"crc-debug-gr9vt\" (UID: \"314fff2a-87cd-4542-b9b7-8ce3a3aec0e9\") " pod="openshift-must-gather-qgdnd/crc-debug-gr9vt" Jan 22 10:13:20 crc kubenswrapper[4892]: I0122 10:13:20.516141 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rctv\" (UniqueName: \"kubernetes.io/projected/314fff2a-87cd-4542-b9b7-8ce3a3aec0e9-kube-api-access-8rctv\") pod \"crc-debug-gr9vt\" (UID: \"314fff2a-87cd-4542-b9b7-8ce3a3aec0e9\") " pod="openshift-must-gather-qgdnd/crc-debug-gr9vt" Jan 22 10:13:20 crc kubenswrapper[4892]: I0122 10:13:20.618064 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rctv\" (UniqueName: \"kubernetes.io/projected/314fff2a-87cd-4542-b9b7-8ce3a3aec0e9-kube-api-access-8rctv\") pod \"crc-debug-gr9vt\" (UID: \"314fff2a-87cd-4542-b9b7-8ce3a3aec0e9\") " pod="openshift-must-gather-qgdnd/crc-debug-gr9vt" Jan 22 10:13:20 crc kubenswrapper[4892]: I0122 10:13:20.618462 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/314fff2a-87cd-4542-b9b7-8ce3a3aec0e9-host\") pod \"crc-debug-gr9vt\" (UID: \"314fff2a-87cd-4542-b9b7-8ce3a3aec0e9\") " pod="openshift-must-gather-qgdnd/crc-debug-gr9vt" Jan 22 10:13:20 crc kubenswrapper[4892]: I0122 10:13:20.618589 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/314fff2a-87cd-4542-b9b7-8ce3a3aec0e9-host\") pod \"crc-debug-gr9vt\" (UID: \"314fff2a-87cd-4542-b9b7-8ce3a3aec0e9\") " pod="openshift-must-gather-qgdnd/crc-debug-gr9vt" Jan 22 10:13:20 crc kubenswrapper[4892]: I0122 10:13:20.639623 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rctv\" (UniqueName: \"kubernetes.io/projected/314fff2a-87cd-4542-b9b7-8ce3a3aec0e9-kube-api-access-8rctv\") pod \"crc-debug-gr9vt\" (UID: \"314fff2a-87cd-4542-b9b7-8ce3a3aec0e9\") " pod="openshift-must-gather-qgdnd/crc-debug-gr9vt" Jan 22 10:13:20 crc kubenswrapper[4892]: I0122 10:13:20.799522 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qgdnd/crc-debug-gr9vt" Jan 22 10:13:21 crc kubenswrapper[4892]: I0122 10:13:21.562256 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qgdnd/crc-debug-gr9vt" event={"ID":"314fff2a-87cd-4542-b9b7-8ce3a3aec0e9","Type":"ContainerStarted","Data":"9e9e34eafe75c0a178ed54c2fecfaec7c4fcbbaebb553c9c5eaed4d820d4f7cd"} Jan 22 10:13:22 crc kubenswrapper[4892]: I0122 10:13:22.791652 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-59ddd484c6-7p5xf_b812f439-988c-4120-8b36-e21df38c2b97/barbican-api-log/0.log" Jan 22 10:13:22 crc kubenswrapper[4892]: I0122 10:13:22.812595 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-59ddd484c6-7p5xf_b812f439-988c-4120-8b36-e21df38c2b97/barbican-api/0.log" Jan 22 10:13:22 crc kubenswrapper[4892]: I0122 10:13:22.846918 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6cbdbc497d-dqskw_4d7e7ea0-d123-41ab-bd59-0f6da52316bd/barbican-keystone-listener-log/0.log" Jan 22 10:13:22 crc kubenswrapper[4892]: I0122 10:13:22.859702 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6cbdbc497d-dqskw_4d7e7ea0-d123-41ab-bd59-0f6da52316bd/barbican-keystone-listener/0.log" Jan 22 10:13:22 crc kubenswrapper[4892]: I0122 10:13:22.882383 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7c7f45b4bf-xx9p2_1a6b0877-2c23-4ebd-a433-620571e4c0bf/barbican-worker-log/0.log" Jan 22 10:13:22 crc kubenswrapper[4892]: I0122 10:13:22.888565 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7c7f45b4bf-xx9p2_1a6b0877-2c23-4ebd-a433-620571e4c0bf/barbican-worker/0.log" Jan 22 10:13:22 crc kubenswrapper[4892]: I0122 10:13:22.925847 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk_3ca49e96-a4fc-4e54-bb55-b32d42d72734/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:13:22 crc kubenswrapper[4892]: I0122 10:13:22.959999 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b08f54a7-5e8e-4143-8585-1c91201b25df/ceilometer-central-agent/0.log" Jan 22 10:13:22 crc kubenswrapper[4892]: I0122 10:13:22.987593 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b08f54a7-5e8e-4143-8585-1c91201b25df/ceilometer-notification-agent/0.log" Jan 22 10:13:23 crc kubenswrapper[4892]: I0122 10:13:23.001672 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b08f54a7-5e8e-4143-8585-1c91201b25df/sg-core/0.log" Jan 22 10:13:23 crc kubenswrapper[4892]: I0122 10:13:23.016199 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b08f54a7-5e8e-4143-8585-1c91201b25df/proxy-httpd/0.log" Jan 22 10:13:23 crc kubenswrapper[4892]: I0122 10:13:23.033511 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_05319583-8c6d-43a9-88b6-1cba9781f85b/cinder-api-log/0.log" Jan 22 10:13:23 crc kubenswrapper[4892]: I0122 10:13:23.092560 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_05319583-8c6d-43a9-88b6-1cba9781f85b/cinder-api/0.log" Jan 22 10:13:23 crc kubenswrapper[4892]: I0122 10:13:23.147820 4892 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-scheduler-0_808833e1-7e58-4b7e-a1bb-ff5cc72b5b35/cinder-scheduler/0.log" Jan 22 10:13:23 crc kubenswrapper[4892]: I0122 10:13:23.187623 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_808833e1-7e58-4b7e-a1bb-ff5cc72b5b35/probe/0.log" Jan 22 10:13:23 crc kubenswrapper[4892]: I0122 10:13:23.255656 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-g4gkj_ab573651-bad0-413d-9c16-46aac4818b9b/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:13:23 crc kubenswrapper[4892]: I0122 10:13:23.314934 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-tqbhz_7ded4dd1-51b6-427d-8f8f-44da3828ef6b/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:13:23 crc kubenswrapper[4892]: I0122 10:13:23.429725 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-66fc59ccbf-nt7cg_e4d2f9f5-3308-487a-871d-b411f6951ead/dnsmasq-dns/0.log" Jan 22 10:13:24 crc kubenswrapper[4892]: I0122 10:13:24.373843 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-66fc59ccbf-nt7cg_e4d2f9f5-3308-487a-871d-b411f6951ead/init/0.log" Jan 22 10:13:24 crc kubenswrapper[4892]: I0122 10:13:24.474914 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-65kt2_9cd3e716-8070-42ec-87ad-4fc03fe2be23/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:13:24 crc kubenswrapper[4892]: I0122 10:13:24.491062 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e16d2673-ef7d-40c6-b1ae-c43fc8771d30/glance-log/0.log" Jan 22 10:13:24 crc kubenswrapper[4892]: I0122 10:13:24.509714 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e16d2673-ef7d-40c6-b1ae-c43fc8771d30/glance-httpd/0.log" Jan 22 10:13:24 crc kubenswrapper[4892]: I0122 10:13:24.522633 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c0d04b37-82ff-4c76-ab88-4602d405c9e0/glance-log/0.log" Jan 22 10:13:24 crc kubenswrapper[4892]: I0122 10:13:24.551478 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c0d04b37-82ff-4c76-ab88-4602d405c9e0/glance-httpd/0.log" Jan 22 10:13:25 crc kubenswrapper[4892]: I0122 10:13:25.008602 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7bd8749ddb-x9h4l_a434b179-017a-4112-a673-1859114a62ed/horizon-log/0.log" Jan 22 10:13:25 crc kubenswrapper[4892]: I0122 10:13:25.133120 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7bd8749ddb-x9h4l_a434b179-017a-4112-a673-1859114a62ed/horizon/0.log" Jan 22 10:13:25 crc kubenswrapper[4892]: I0122 10:13:25.217490 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-276h4_1f22ae69-a8dd-4646-836d-d48376094ceb/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:13:25 crc kubenswrapper[4892]: I0122 10:13:25.270556 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-mdwhd_402a9581-6783-46e0-8147-2e443d9a0608/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:13:25 crc kubenswrapper[4892]: 
I0122 10:13:25.429057 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7647f5f4ff-hmkw9_6ab62f99-9658-4ad6-be05-4f0849b6d6d5/keystone-api/0.log" Jan 22 10:13:25 crc kubenswrapper[4892]: I0122 10:13:25.440577 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29484601-vb56j_675953a2-7c44-4857-a9f6-47dcb2049507/keystone-cron/0.log" Jan 22 10:13:25 crc kubenswrapper[4892]: I0122 10:13:25.456591 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_6948adf9-b332-4b21-82e2-444fc998ebe5/kube-state-metrics/0.log" Jan 22 10:13:25 crc kubenswrapper[4892]: I0122 10:13:25.493633 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg_38fc771d-608b-4a8e-a7ec-7cfa932abc41/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:13:31 crc kubenswrapper[4892]: I0122 10:13:31.426789 4892 scope.go:117] "RemoveContainer" containerID="10bae86c283fa7a460d03cd4fb5e38372c678bc14876631954f38cd2968b54bf" Jan 22 10:13:31 crc kubenswrapper[4892]: E0122 10:13:31.428349 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:13:34 crc kubenswrapper[4892]: I0122 10:13:34.717790 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qgdnd/crc-debug-gr9vt" event={"ID":"314fff2a-87cd-4542-b9b7-8ce3a3aec0e9","Type":"ContainerStarted","Data":"68701e2424da972b9172f8067f4474e6a635944cb8e264dae1c8df64b60dbe66"} Jan 22 10:13:34 crc kubenswrapper[4892]: I0122 10:13:34.742393 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qgdnd/crc-debug-gr9vt" podStartSLOduration=1.511578433 podStartE2EDuration="14.742375761s" podCreationTimestamp="2026-01-22 10:13:20 +0000 UTC" firstStartedPulling="2026-01-22 10:13:20.827802614 +0000 UTC m=+3770.671881677" lastFinishedPulling="2026-01-22 10:13:34.058599942 +0000 UTC m=+3783.902679005" observedRunningTime="2026-01-22 10:13:34.739001866 +0000 UTC m=+3784.583080929" watchObservedRunningTime="2026-01-22 10:13:34.742375761 +0000 UTC m=+3784.586454824" Jan 22 10:13:42 crc kubenswrapper[4892]: I0122 10:13:42.151802 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_d1581ad3-031b-451b-a8a7-bea327cf4ecd/memcached/0.log" Jan 22 10:13:42 crc kubenswrapper[4892]: I0122 10:13:42.258413 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5b7dcc6b6f-vkw7t_6d80b524-788b-4fdf-b8bf-28ae522512e1/neutron-api/0.log" Jan 22 10:13:42 crc kubenswrapper[4892]: I0122 10:13:42.320942 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5b7dcc6b6f-vkw7t_6d80b524-788b-4fdf-b8bf-28ae522512e1/neutron-httpd/0.log" Jan 22 10:13:42 crc kubenswrapper[4892]: I0122 10:13:42.346220 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq_e8f16545-12e1-4084-84f3-a3598a939eaf/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:13:42 crc kubenswrapper[4892]: I0122 10:13:42.593794 4892 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_nova-api-0_fb0b7944-3391-4c47-91a6-47c3aa62442a/nova-api-log/0.log" Jan 22 10:13:42 crc kubenswrapper[4892]: I0122 10:13:42.992636 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_fb0b7944-3391-4c47-91a6-47c3aa62442a/nova-api-api/0.log" Jan 22 10:13:43 crc kubenswrapper[4892]: I0122 10:13:43.103803 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_67407100-b6b3-4802-9bb1-337db9cbb3e6/nova-cell0-conductor-conductor/0.log" Jan 22 10:13:43 crc kubenswrapper[4892]: I0122 10:13:43.195758 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_c1860910-1d6f-45fc-b0ce-7aef22083de7/nova-cell1-conductor-conductor/0.log" Jan 22 10:13:43 crc kubenswrapper[4892]: I0122 10:13:43.268799 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_41e0d5e8-eee8-4d06-ae1f-fec66e793078/nova-cell1-novncproxy-novncproxy/0.log" Jan 22 10:13:43 crc kubenswrapper[4892]: I0122 10:13:43.326885 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-vnhgw_12c8d866-32b3-4952-bffa-4993dd9dede1/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:13:43 crc kubenswrapper[4892]: I0122 10:13:43.412578 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_51bca3ef-0b5c-4c51-bf42-95ad11eba3be/nova-metadata-log/0.log" Jan 22 10:13:43 crc kubenswrapper[4892]: I0122 10:13:43.419034 4892 scope.go:117] "RemoveContainer" containerID="10bae86c283fa7a460d03cd4fb5e38372c678bc14876631954f38cd2968b54bf" Jan 22 10:13:43 crc kubenswrapper[4892]: E0122 10:13:43.419529 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:13:44 crc kubenswrapper[4892]: I0122 10:13:44.549712 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_51bca3ef-0b5c-4c51-bf42-95ad11eba3be/nova-metadata-metadata/0.log" Jan 22 10:13:44 crc kubenswrapper[4892]: I0122 10:13:44.655937 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-szvst_e6301fe9-08d8-4bac-87e9-227fcc218129/controller/0.log" Jan 22 10:13:44 crc kubenswrapper[4892]: I0122 10:13:44.670155 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-szvst_e6301fe9-08d8-4bac-87e9-227fcc218129/kube-rbac-proxy/0.log" Jan 22 10:13:44 crc kubenswrapper[4892]: I0122 10:13:44.692744 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/controller/0.log" Jan 22 10:13:44 crc kubenswrapper[4892]: I0122 10:13:44.698297 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_27216438-d79d-4606-8ac6-6636fc9b6e06/nova-scheduler-scheduler/0.log" Jan 22 10:13:44 crc kubenswrapper[4892]: I0122 10:13:44.736107 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5dcc844d-f681-4c5c-acb5-0edc57e32a0f/galera/0.log" Jan 22 10:13:44 crc kubenswrapper[4892]: I0122 
10:13:44.754374 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5dcc844d-f681-4c5c-acb5-0edc57e32a0f/mysql-bootstrap/0.log" Jan 22 10:13:44 crc kubenswrapper[4892]: I0122 10:13:44.785132 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_aa34c3fd-3e21-49ac-becd-283928666ff2/galera/0.log" Jan 22 10:13:44 crc kubenswrapper[4892]: I0122 10:13:44.801263 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_aa34c3fd-3e21-49ac-becd-283928666ff2/mysql-bootstrap/0.log" Jan 22 10:13:44 crc kubenswrapper[4892]: I0122 10:13:44.811035 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_d9a4d1e6-4981-477c-b2cf-8a132de2c1d9/openstackclient/0.log" Jan 22 10:13:44 crc kubenswrapper[4892]: I0122 10:13:44.830040 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-f8rll_91fb6665-4bf4-4558-abf7-788627c34a1c/ovn-controller/0.log" Jan 22 10:13:44 crc kubenswrapper[4892]: I0122 10:13:44.840531 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-jh6lw_c16202bf-0e93-4bb8-96fb-cf6537ea21e6/openstack-network-exporter/0.log" Jan 22 10:13:44 crc kubenswrapper[4892]: I0122 10:13:44.852692 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-snr9q_2d7ca514-734a-4ab1-890f-b04a1549c073/ovsdb-server/0.log" Jan 22 10:13:44 crc kubenswrapper[4892]: I0122 10:13:44.867350 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-snr9q_2d7ca514-734a-4ab1-890f-b04a1549c073/ovs-vswitchd/0.log" Jan 22 10:13:44 crc kubenswrapper[4892]: I0122 10:13:44.875404 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-snr9q_2d7ca514-734a-4ab1-890f-b04a1549c073/ovsdb-server-init/0.log" Jan 22 10:13:44 crc kubenswrapper[4892]: I0122 10:13:44.919235 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-cqxm8_387b75ce-f980-4a8c-a230-15522ca7b923/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:13:44 crc kubenswrapper[4892]: I0122 10:13:44.943620 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_90d16687-ebb2-43f7-bdf4-04334f5895d7/ovn-northd/0.log" Jan 22 10:13:44 crc kubenswrapper[4892]: I0122 10:13:44.952786 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_90d16687-ebb2-43f7-bdf4-04334f5895d7/openstack-network-exporter/0.log" Jan 22 10:13:44 crc kubenswrapper[4892]: I0122 10:13:44.979135 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_74af166d-c2f0-43b1-a516-e1d393e873b4/ovsdbserver-nb/0.log" Jan 22 10:13:44 crc kubenswrapper[4892]: I0122 10:13:44.987195 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_74af166d-c2f0-43b1-a516-e1d393e873b4/openstack-network-exporter/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 10:13:45.005715 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ed3e46d9-e9ca-453a-92a3-a07471597296/ovsdbserver-sb/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 10:13:45.011978 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ed3e46d9-e9ca-453a-92a3-a07471597296/openstack-network-exporter/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 
10:13:45.114936 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-8cff6669d-x8cnv_fa434e36-332b-401e-99b3-2dcb7d75da94/placement-log/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 10:13:45.190712 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-8cff6669d-x8cnv_fa434e36-332b-401e-99b3-2dcb7d75da94/placement-api/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 10:13:45.217559 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_57552917-a09b-4f52-96b5-c7749b9af779/rabbitmq/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 10:13:45.224551 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_57552917-a09b-4f52-96b5-c7749b9af779/setup-container/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 10:13:45.261081 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_30fa58bc-46e3-40c4-ad73-3f2e1f8341dd/rabbitmq/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 10:13:45.266757 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_30fa58bc-46e3-40c4-ad73-3f2e1f8341dd/setup-container/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 10:13:45.282512 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-nwf54_cc609164-f9fe-4caf-ae10-ed043d1091fe/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 10:13:45.296941 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-48xvm_4316ad67-9810-4253-bcfb-faa1b9936429/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 10:13:45.311308 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5_8bb695bf-11e7-478a-a348-2a06ef0bcdaf/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 10:13:45.330915 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-8d76w_fac9a973-588e-43e5-b6d1-530127ccccad/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 10:13:45.344059 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-l6df8_9486c77a-626c-488a-a958-d717027e31db/ssh-known-hosts-edpm-deployment/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 10:13:45.498721 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-674547b56f-gvjxm_accdf866-14d0-4308-a8d7-c598fde46122/proxy-httpd/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 10:13:45.519777 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-674547b56f-gvjxm_accdf866-14d0-4308-a8d7-c598fde46122/proxy-server/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 10:13:45.529136 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-x9lwk_b23a9c04-b07c-4dd1-a475-7b1d70b9bddc/swift-ring-rebalance/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 10:13:45.572691 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0c2888e-984a-482d-b7a3-5de66720aaf8/account-server/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 
10:13:45.613447 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0c2888e-984a-482d-b7a3-5de66720aaf8/account-replicator/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 10:13:45.622916 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0c2888e-984a-482d-b7a3-5de66720aaf8/account-auditor/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 10:13:45.633277 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0c2888e-984a-482d-b7a3-5de66720aaf8/account-reaper/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 10:13:45.640779 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0c2888e-984a-482d-b7a3-5de66720aaf8/container-server/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 10:13:45.674155 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0c2888e-984a-482d-b7a3-5de66720aaf8/container-replicator/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 10:13:45.681065 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0c2888e-984a-482d-b7a3-5de66720aaf8/container-auditor/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 10:13:45.692532 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0c2888e-984a-482d-b7a3-5de66720aaf8/container-updater/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 10:13:45.703485 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0c2888e-984a-482d-b7a3-5de66720aaf8/object-server/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 10:13:45.739472 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0c2888e-984a-482d-b7a3-5de66720aaf8/object-replicator/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 10:13:45.765736 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0c2888e-984a-482d-b7a3-5de66720aaf8/object-auditor/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 10:13:45.775826 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0c2888e-984a-482d-b7a3-5de66720aaf8/object-updater/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 10:13:45.784162 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0c2888e-984a-482d-b7a3-5de66720aaf8/object-expirer/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 10:13:45.790815 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0c2888e-984a-482d-b7a3-5de66720aaf8/rsync/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 10:13:45.798302 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0c2888e-984a-482d-b7a3-5de66720aaf8/swift-recon-cron/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 10:13:45.921206 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w_cdb08ec6-d82f-4ea7-b6af-170f51b46949/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 10:13:45.958061 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_13171535-bfb7-4114-884d-b9b031615de3/tempest-tests-tempest-tests-runner/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 10:13:45.966056 4892 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_6efa520d-5ef8-49b9-b90f-197efdf100ed/test-operator-logs-container/0.log" Jan 22 10:13:45 crc kubenswrapper[4892]: I0122 10:13:45.988898 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-6lb4t_49d64b56-37f0-45f2-8aec-a3dfbf171f09/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:13:46 crc kubenswrapper[4892]: I0122 10:13:46.513821 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/frr/0.log" Jan 22 10:13:46 crc kubenswrapper[4892]: I0122 10:13:46.567913 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/reloader/0.log" Jan 22 10:13:46 crc kubenswrapper[4892]: I0122 10:13:46.575754 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/frr-metrics/0.log" Jan 22 10:13:46 crc kubenswrapper[4892]: I0122 10:13:46.588961 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/kube-rbac-proxy/0.log" Jan 22 10:13:46 crc kubenswrapper[4892]: I0122 10:13:46.603571 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/kube-rbac-proxy-frr/0.log" Jan 22 10:13:46 crc kubenswrapper[4892]: I0122 10:13:46.610395 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/cp-frr-files/0.log" Jan 22 10:13:46 crc kubenswrapper[4892]: I0122 10:13:46.636828 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/cp-reloader/0.log" Jan 22 10:13:46 crc kubenswrapper[4892]: I0122 10:13:46.643448 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/cp-metrics/0.log" Jan 22 10:13:46 crc kubenswrapper[4892]: I0122 10:13:46.651673 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-drtv6_667e6efb-6488-461d-8e5f-380e05c4956e/frr-k8s-webhook-server/0.log" Jan 22 10:13:46 crc kubenswrapper[4892]: I0122 10:13:46.676460 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6744fff56c-5c2wg_e2e7c48f-6e23-4156-b679-30f2d9735501/manager/0.log" Jan 22 10:13:46 crc kubenswrapper[4892]: I0122 10:13:46.688995 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5bdbd58466-bwr22_b49a3e83-8e00-4934-8968-97d1905959d0/webhook-server/0.log" Jan 22 10:13:47 crc kubenswrapper[4892]: I0122 10:13:47.001781 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2bjk4_caa80b1d-b3d2-47d6-99e6-73420bc5f61d/speaker/0.log" Jan 22 10:13:47 crc kubenswrapper[4892]: I0122 10:13:47.011096 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2bjk4_caa80b1d-b3d2-47d6-99e6-73420bc5f61d/kube-rbac-proxy/0.log" Jan 22 10:13:50 crc kubenswrapper[4892]: I0122 10:13:50.376527 4892 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59dd8b7cbf-mcfls_f7ec268a-c82e-455e-b4b9-d0f96998c015/manager/0.log" Jan 22 10:13:50 crc kubenswrapper[4892]: I0122 10:13:50.422499 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-fnrjr_815dba39-30ed-4471-bf04-ecc573373016/manager/0.log" Jan 22 10:13:50 crc kubenswrapper[4892]: I0122 10:13:50.435006 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-sx9p8_c020c33f-f12c-47ce-9639-c0069dff8bc4/manager/0.log" Jan 22 10:13:50 crc kubenswrapper[4892]: I0122 10:13:50.449213 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd_97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8/extract/0.log" Jan 22 10:13:50 crc kubenswrapper[4892]: I0122 10:13:50.455389 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd_97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8/util/0.log" Jan 22 10:13:50 crc kubenswrapper[4892]: I0122 10:13:50.463901 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd_97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8/pull/0.log" Jan 22 10:13:50 crc kubenswrapper[4892]: I0122 10:13:50.557746 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-9lqvx_2047bcfa-42e4-4e81-b2c9-47f4a876ea84/manager/0.log" Jan 22 10:13:50 crc kubenswrapper[4892]: I0122 10:13:50.568153 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-b9v4x_fcd15b84-585b-4984-9c1f-26a6c585ada4/manager/0.log" Jan 22 10:13:50 crc kubenswrapper[4892]: I0122 10:13:50.600005 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-wkmzq_c9a77485-9340-433e-8bf6-cd47551438a9/manager/0.log" Jan 22 10:13:50 crc kubenswrapper[4892]: I0122 10:13:50.873954 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-25z65_4f507c71-c9ab-4398-b25a-b6070d41f2b7/manager/0.log" Jan 22 10:13:50 crc kubenswrapper[4892]: I0122 10:13:50.887485 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69d6c9f5b8-dcjs4_186e1123-d674-468b-91c1-92eb6bca4a30/manager/0.log" Jan 22 10:13:50 crc kubenswrapper[4892]: I0122 10:13:50.946367 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-vm28p_361a2cfd-62a4-40cc-b85c-7e81e6adb91d/manager/0.log" Jan 22 10:13:50 crc kubenswrapper[4892]: I0122 10:13:50.958552 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-67mcr_f942aff3-65c5-4507-af71-0e4596abc4cf/manager/0.log" Jan 22 10:13:50 crc kubenswrapper[4892]: I0122 10:13:50.998643 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-4ldkj_fd035f9e-2587-4286-85d9-db7c209970de/manager/0.log" Jan 22 10:13:51 crc kubenswrapper[4892]: I0122 10:13:51.044263 4892 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5d8f59fb49-dvlzw_928d4875-5da0-47ce-a68d-99fed2b7edce/manager/0.log" Jan 22 10:13:51 crc kubenswrapper[4892]: I0122 10:13:51.127339 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-pkbln_8a19ffda-db08-44ec-bc17-d70c74f9552e/manager/0.log" Jan 22 10:13:51 crc kubenswrapper[4892]: I0122 10:13:51.138703 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-sjml2_43ab3264-2c0d-44a8-ab85-66efc360bf67/manager/0.log" Jan 22 10:13:51 crc kubenswrapper[4892]: I0122 10:13:51.160458 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr_c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea/manager/0.log" Jan 22 10:13:51 crc kubenswrapper[4892]: I0122 10:13:51.292100 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-698d6bb84b-sckbn_bf11bbca-62bd-4421-b0be-a62f87a6d600/operator/0.log" Jan 22 10:13:52 crc kubenswrapper[4892]: I0122 10:13:52.499301 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-788c8b99b5-cws6m_7b2bb8eb-1122-4141-a4ed-c3d316c8b821/manager/0.log" Jan 22 10:13:52 crc kubenswrapper[4892]: I0122 10:13:52.562070 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-hq2gz_016ec7ec-1244-47ab-81ba-957ed4b83b4f/registry-server/0.log" Jan 22 10:13:52 crc kubenswrapper[4892]: I0122 10:13:52.617645 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-9htzp_4ce3456e-dba6-498d-bf5a-aef2832489fe/manager/0.log" Jan 22 10:13:52 crc kubenswrapper[4892]: I0122 10:13:52.644819 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-hf9ft_e23d3dd6-bce9-496f-840b-0bbd3017826f/manager/0.log" Jan 22 10:13:52 crc kubenswrapper[4892]: I0122 10:13:52.665207 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-hkmzg_7be69e64-d272-47f2-933a-4925c0aad02c/operator/0.log" Jan 22 10:13:52 crc kubenswrapper[4892]: I0122 10:13:52.695503 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-gfcjl_f7dcb7b0-0580-4aff-8770-377761a44f88/manager/0.log" Jan 22 10:13:52 crc kubenswrapper[4892]: I0122 10:13:52.762949 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-2n9gl_062ff35c-ceb7-44b0-a2ef-1d79a14a444c/manager/0.log" Jan 22 10:13:52 crc kubenswrapper[4892]: I0122 10:13:52.773895 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-hj2tb_be68c0da-a0d9-463c-be32-6191b85ae620/manager/0.log" Jan 22 10:13:52 crc kubenswrapper[4892]: I0122 10:13:52.783736 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5ffb9c6597-xq8jw_b6638ff5-13e6-44b1-8711-0c775882282f/manager/0.log" Jan 22 10:13:57 crc kubenswrapper[4892]: I0122 10:13:57.030567 4892 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-wf6dw_3a01b910-5841-4f20-b270-c7040213ac8d/control-plane-machine-set-operator/0.log" Jan 22 10:13:57 crc kubenswrapper[4892]: I0122 10:13:57.044181 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mhgmk_09f94488-4261-4a70-ab65-e85c42ba3313/kube-rbac-proxy/0.log" Jan 22 10:13:57 crc kubenswrapper[4892]: I0122 10:13:57.053929 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mhgmk_09f94488-4261-4a70-ab65-e85c42ba3313/machine-api-operator/0.log" Jan 22 10:13:58 crc kubenswrapper[4892]: I0122 10:13:58.418545 4892 scope.go:117] "RemoveContainer" containerID="10bae86c283fa7a460d03cd4fb5e38372c678bc14876631954f38cd2968b54bf" Jan 22 10:13:58 crc kubenswrapper[4892]: E0122 10:13:58.419097 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:14:12 crc kubenswrapper[4892]: I0122 10:14:12.092886 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qgdnd/crc-debug-gr9vt" event={"ID":"314fff2a-87cd-4542-b9b7-8ce3a3aec0e9","Type":"ContainerDied","Data":"68701e2424da972b9172f8067f4474e6a635944cb8e264dae1c8df64b60dbe66"} Jan 22 10:14:12 crc kubenswrapper[4892]: I0122 10:14:12.092828 4892 generic.go:334] "Generic (PLEG): container finished" podID="314fff2a-87cd-4542-b9b7-8ce3a3aec0e9" containerID="68701e2424da972b9172f8067f4474e6a635944cb8e264dae1c8df64b60dbe66" exitCode=0 Jan 22 10:14:13 crc kubenswrapper[4892]: I0122 10:14:13.233611 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qgdnd/crc-debug-gr9vt" Jan 22 10:14:13 crc kubenswrapper[4892]: I0122 10:14:13.263257 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qgdnd/crc-debug-gr9vt"] Jan 22 10:14:13 crc kubenswrapper[4892]: I0122 10:14:13.271107 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qgdnd/crc-debug-gr9vt"] Jan 22 10:14:13 crc kubenswrapper[4892]: I0122 10:14:13.383975 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/314fff2a-87cd-4542-b9b7-8ce3a3aec0e9-host\") pod \"314fff2a-87cd-4542-b9b7-8ce3a3aec0e9\" (UID: \"314fff2a-87cd-4542-b9b7-8ce3a3aec0e9\") " Jan 22 10:14:13 crc kubenswrapper[4892]: I0122 10:14:13.384453 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rctv\" (UniqueName: \"kubernetes.io/projected/314fff2a-87cd-4542-b9b7-8ce3a3aec0e9-kube-api-access-8rctv\") pod \"314fff2a-87cd-4542-b9b7-8ce3a3aec0e9\" (UID: \"314fff2a-87cd-4542-b9b7-8ce3a3aec0e9\") " Jan 22 10:14:13 crc kubenswrapper[4892]: I0122 10:14:13.384079 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/314fff2a-87cd-4542-b9b7-8ce3a3aec0e9-host" (OuterVolumeSpecName: "host") pod "314fff2a-87cd-4542-b9b7-8ce3a3aec0e9" (UID: "314fff2a-87cd-4542-b9b7-8ce3a3aec0e9"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:14:13 crc kubenswrapper[4892]: I0122 10:14:13.385365 4892 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/314fff2a-87cd-4542-b9b7-8ce3a3aec0e9-host\") on node \"crc\" DevicePath \"\"" Jan 22 10:14:13 crc kubenswrapper[4892]: I0122 10:14:13.391518 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/314fff2a-87cd-4542-b9b7-8ce3a3aec0e9-kube-api-access-8rctv" (OuterVolumeSpecName: "kube-api-access-8rctv") pod "314fff2a-87cd-4542-b9b7-8ce3a3aec0e9" (UID: "314fff2a-87cd-4542-b9b7-8ce3a3aec0e9"). InnerVolumeSpecName "kube-api-access-8rctv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:14:13 crc kubenswrapper[4892]: I0122 10:14:13.419232 4892 scope.go:117] "RemoveContainer" containerID="10bae86c283fa7a460d03cd4fb5e38372c678bc14876631954f38cd2968b54bf" Jan 22 10:14:13 crc kubenswrapper[4892]: E0122 10:14:13.419605 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:14:13 crc kubenswrapper[4892]: I0122 10:14:13.434694 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="314fff2a-87cd-4542-b9b7-8ce3a3aec0e9" path="/var/lib/kubelet/pods/314fff2a-87cd-4542-b9b7-8ce3a3aec0e9/volumes" Jan 22 10:14:13 crc kubenswrapper[4892]: I0122 10:14:13.487230 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rctv\" (UniqueName: \"kubernetes.io/projected/314fff2a-87cd-4542-b9b7-8ce3a3aec0e9-kube-api-access-8rctv\") on node \"crc\" DevicePath \"\"" Jan 22 10:14:14 crc kubenswrapper[4892]: I0122 10:14:14.113183 4892 scope.go:117] "RemoveContainer" containerID="68701e2424da972b9172f8067f4474e6a635944cb8e264dae1c8df64b60dbe66" Jan 22 10:14:14 crc kubenswrapper[4892]: I0122 10:14:14.113389 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qgdnd/crc-debug-gr9vt" Jan 22 10:14:14 crc kubenswrapper[4892]: I0122 10:14:14.428241 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qgdnd/crc-debug-kngll"] Jan 22 10:14:14 crc kubenswrapper[4892]: E0122 10:14:14.428949 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="314fff2a-87cd-4542-b9b7-8ce3a3aec0e9" containerName="container-00" Jan 22 10:14:14 crc kubenswrapper[4892]: I0122 10:14:14.428962 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="314fff2a-87cd-4542-b9b7-8ce3a3aec0e9" containerName="container-00" Jan 22 10:14:14 crc kubenswrapper[4892]: I0122 10:14:14.429132 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="314fff2a-87cd-4542-b9b7-8ce3a3aec0e9" containerName="container-00" Jan 22 10:14:14 crc kubenswrapper[4892]: I0122 10:14:14.429730 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qgdnd/crc-debug-kngll" Jan 22 10:14:14 crc kubenswrapper[4892]: I0122 10:14:14.608615 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5ce6b0d9-6a8d-47b6-afe1-083d6dbd1509-host\") pod \"crc-debug-kngll\" (UID: \"5ce6b0d9-6a8d-47b6-afe1-083d6dbd1509\") " pod="openshift-must-gather-qgdnd/crc-debug-kngll" Jan 22 10:14:14 crc kubenswrapper[4892]: I0122 10:14:14.608723 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzc7x\" (UniqueName: \"kubernetes.io/projected/5ce6b0d9-6a8d-47b6-afe1-083d6dbd1509-kube-api-access-gzc7x\") pod \"crc-debug-kngll\" (UID: \"5ce6b0d9-6a8d-47b6-afe1-083d6dbd1509\") " pod="openshift-must-gather-qgdnd/crc-debug-kngll" Jan 22 10:14:14 crc kubenswrapper[4892]: I0122 10:14:14.711253 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5ce6b0d9-6a8d-47b6-afe1-083d6dbd1509-host\") pod \"crc-debug-kngll\" (UID: \"5ce6b0d9-6a8d-47b6-afe1-083d6dbd1509\") " pod="openshift-must-gather-qgdnd/crc-debug-kngll" Jan 22 10:14:14 crc kubenswrapper[4892]: I0122 10:14:14.711615 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzc7x\" (UniqueName: \"kubernetes.io/projected/5ce6b0d9-6a8d-47b6-afe1-083d6dbd1509-kube-api-access-gzc7x\") pod \"crc-debug-kngll\" (UID: \"5ce6b0d9-6a8d-47b6-afe1-083d6dbd1509\") " pod="openshift-must-gather-qgdnd/crc-debug-kngll" Jan 22 10:14:14 crc kubenswrapper[4892]: I0122 10:14:14.711391 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5ce6b0d9-6a8d-47b6-afe1-083d6dbd1509-host\") pod \"crc-debug-kngll\" (UID: \"5ce6b0d9-6a8d-47b6-afe1-083d6dbd1509\") " pod="openshift-must-gather-qgdnd/crc-debug-kngll" Jan 22 10:14:14 crc kubenswrapper[4892]: I0122 10:14:14.729043 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzc7x\" (UniqueName: \"kubernetes.io/projected/5ce6b0d9-6a8d-47b6-afe1-083d6dbd1509-kube-api-access-gzc7x\") pod \"crc-debug-kngll\" (UID: \"5ce6b0d9-6a8d-47b6-afe1-083d6dbd1509\") " pod="openshift-must-gather-qgdnd/crc-debug-kngll" Jan 22 10:14:14 crc kubenswrapper[4892]: I0122 10:14:14.754544 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qgdnd/crc-debug-kngll" Jan 22 10:14:15 crc kubenswrapper[4892]: I0122 10:14:15.124511 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qgdnd/crc-debug-kngll" event={"ID":"5ce6b0d9-6a8d-47b6-afe1-083d6dbd1509","Type":"ContainerStarted","Data":"4bf255052d7dac585723bd0c1ca74ffabace0a7d9ca8a5a85f476ca38c97f58d"} Jan 22 10:14:16 crc kubenswrapper[4892]: I0122 10:14:16.138269 4892 generic.go:334] "Generic (PLEG): container finished" podID="5ce6b0d9-6a8d-47b6-afe1-083d6dbd1509" containerID="5a7afe9fca1ea203517970aaf2cdce7ac39c4593d0a37171c1e2feea1ba3ba61" exitCode=0 Jan 22 10:14:16 crc kubenswrapper[4892]: I0122 10:14:16.138405 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qgdnd/crc-debug-kngll" event={"ID":"5ce6b0d9-6a8d-47b6-afe1-083d6dbd1509","Type":"ContainerDied","Data":"5a7afe9fca1ea203517970aaf2cdce7ac39c4593d0a37171c1e2feea1ba3ba61"} Jan 22 10:14:16 crc kubenswrapper[4892]: I0122 10:14:16.585013 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qgdnd/crc-debug-kngll"] Jan 22 10:14:16 crc kubenswrapper[4892]: I0122 10:14:16.595047 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qgdnd/crc-debug-kngll"] Jan 22 10:14:17 crc kubenswrapper[4892]: I0122 10:14:17.243993 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qgdnd/crc-debug-kngll" Jan 22 10:14:17 crc kubenswrapper[4892]: I0122 10:14:17.279190 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzc7x\" (UniqueName: \"kubernetes.io/projected/5ce6b0d9-6a8d-47b6-afe1-083d6dbd1509-kube-api-access-gzc7x\") pod \"5ce6b0d9-6a8d-47b6-afe1-083d6dbd1509\" (UID: \"5ce6b0d9-6a8d-47b6-afe1-083d6dbd1509\") " Jan 22 10:14:17 crc kubenswrapper[4892]: I0122 10:14:17.279389 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5ce6b0d9-6a8d-47b6-afe1-083d6dbd1509-host\") pod \"5ce6b0d9-6a8d-47b6-afe1-083d6dbd1509\" (UID: \"5ce6b0d9-6a8d-47b6-afe1-083d6dbd1509\") " Jan 22 10:14:17 crc kubenswrapper[4892]: I0122 10:14:17.279491 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ce6b0d9-6a8d-47b6-afe1-083d6dbd1509-host" (OuterVolumeSpecName: "host") pod "5ce6b0d9-6a8d-47b6-afe1-083d6dbd1509" (UID: "5ce6b0d9-6a8d-47b6-afe1-083d6dbd1509"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:14:17 crc kubenswrapper[4892]: I0122 10:14:17.280056 4892 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5ce6b0d9-6a8d-47b6-afe1-083d6dbd1509-host\") on node \"crc\" DevicePath \"\"" Jan 22 10:14:17 crc kubenswrapper[4892]: I0122 10:14:17.287542 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ce6b0d9-6a8d-47b6-afe1-083d6dbd1509-kube-api-access-gzc7x" (OuterVolumeSpecName: "kube-api-access-gzc7x") pod "5ce6b0d9-6a8d-47b6-afe1-083d6dbd1509" (UID: "5ce6b0d9-6a8d-47b6-afe1-083d6dbd1509"). InnerVolumeSpecName "kube-api-access-gzc7x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:14:17 crc kubenswrapper[4892]: I0122 10:14:17.383480 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzc7x\" (UniqueName: \"kubernetes.io/projected/5ce6b0d9-6a8d-47b6-afe1-083d6dbd1509-kube-api-access-gzc7x\") on node \"crc\" DevicePath \"\"" Jan 22 10:14:17 crc kubenswrapper[4892]: I0122 10:14:17.434924 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ce6b0d9-6a8d-47b6-afe1-083d6dbd1509" path="/var/lib/kubelet/pods/5ce6b0d9-6a8d-47b6-afe1-083d6dbd1509/volumes" Jan 22 10:14:17 crc kubenswrapper[4892]: I0122 10:14:17.762941 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qgdnd/crc-debug-52bfm"] Jan 22 10:14:17 crc kubenswrapper[4892]: E0122 10:14:17.763701 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ce6b0d9-6a8d-47b6-afe1-083d6dbd1509" containerName="container-00" Jan 22 10:14:17 crc kubenswrapper[4892]: I0122 10:14:17.763719 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ce6b0d9-6a8d-47b6-afe1-083d6dbd1509" containerName="container-00" Jan 22 10:14:17 crc kubenswrapper[4892]: I0122 10:14:17.763924 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ce6b0d9-6a8d-47b6-afe1-083d6dbd1509" containerName="container-00" Jan 22 10:14:17 crc kubenswrapper[4892]: I0122 10:14:17.764560 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qgdnd/crc-debug-52bfm" Jan 22 10:14:17 crc kubenswrapper[4892]: I0122 10:14:17.789818 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w25tc\" (UniqueName: \"kubernetes.io/projected/c1ff1ae5-8c34-494d-b685-f4f6fa36f042-kube-api-access-w25tc\") pod \"crc-debug-52bfm\" (UID: \"c1ff1ae5-8c34-494d-b685-f4f6fa36f042\") " pod="openshift-must-gather-qgdnd/crc-debug-52bfm" Jan 22 10:14:17 crc kubenswrapper[4892]: I0122 10:14:17.789887 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c1ff1ae5-8c34-494d-b685-f4f6fa36f042-host\") pod \"crc-debug-52bfm\" (UID: \"c1ff1ae5-8c34-494d-b685-f4f6fa36f042\") " pod="openshift-must-gather-qgdnd/crc-debug-52bfm" Jan 22 10:14:17 crc kubenswrapper[4892]: I0122 10:14:17.892119 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w25tc\" (UniqueName: \"kubernetes.io/projected/c1ff1ae5-8c34-494d-b685-f4f6fa36f042-kube-api-access-w25tc\") pod \"crc-debug-52bfm\" (UID: \"c1ff1ae5-8c34-494d-b685-f4f6fa36f042\") " pod="openshift-must-gather-qgdnd/crc-debug-52bfm" Jan 22 10:14:17 crc kubenswrapper[4892]: I0122 10:14:17.892430 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c1ff1ae5-8c34-494d-b685-f4f6fa36f042-host\") pod \"crc-debug-52bfm\" (UID: \"c1ff1ae5-8c34-494d-b685-f4f6fa36f042\") " pod="openshift-must-gather-qgdnd/crc-debug-52bfm" Jan 22 10:14:17 crc kubenswrapper[4892]: I0122 10:14:17.892592 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c1ff1ae5-8c34-494d-b685-f4f6fa36f042-host\") pod \"crc-debug-52bfm\" (UID: \"c1ff1ae5-8c34-494d-b685-f4f6fa36f042\") " pod="openshift-must-gather-qgdnd/crc-debug-52bfm" Jan 22 10:14:17 crc kubenswrapper[4892]: I0122 10:14:17.912100 4892 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-w25tc\" (UniqueName: \"kubernetes.io/projected/c1ff1ae5-8c34-494d-b685-f4f6fa36f042-kube-api-access-w25tc\") pod \"crc-debug-52bfm\" (UID: \"c1ff1ae5-8c34-494d-b685-f4f6fa36f042\") " pod="openshift-must-gather-qgdnd/crc-debug-52bfm" Jan 22 10:14:18 crc kubenswrapper[4892]: I0122 10:14:18.083884 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qgdnd/crc-debug-52bfm" Jan 22 10:14:18 crc kubenswrapper[4892]: W0122 10:14:18.125375 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1ff1ae5_8c34_494d_b685_f4f6fa36f042.slice/crio-3b05ae7472dbc0e6c86d76759d303bc51c8c8e838c16fecd29623554f7060c39 WatchSource:0}: Error finding container 3b05ae7472dbc0e6c86d76759d303bc51c8c8e838c16fecd29623554f7060c39: Status 404 returned error can't find the container with id 3b05ae7472dbc0e6c86d76759d303bc51c8c8e838c16fecd29623554f7060c39 Jan 22 10:14:18 crc kubenswrapper[4892]: I0122 10:14:18.159090 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qgdnd/crc-debug-52bfm" event={"ID":"c1ff1ae5-8c34-494d-b685-f4f6fa36f042","Type":"ContainerStarted","Data":"3b05ae7472dbc0e6c86d76759d303bc51c8c8e838c16fecd29623554f7060c39"} Jan 22 10:14:18 crc kubenswrapper[4892]: I0122 10:14:18.162459 4892 scope.go:117] "RemoveContainer" containerID="5a7afe9fca1ea203517970aaf2cdce7ac39c4593d0a37171c1e2feea1ba3ba61" Jan 22 10:14:18 crc kubenswrapper[4892]: I0122 10:14:18.162500 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qgdnd/crc-debug-kngll" Jan 22 10:14:19 crc kubenswrapper[4892]: I0122 10:14:19.171480 4892 generic.go:334] "Generic (PLEG): container finished" podID="c1ff1ae5-8c34-494d-b685-f4f6fa36f042" containerID="381e03a94fa8c5176c2676a4b2b1d77f5564fd8e3f5b2b9783136294fc860f59" exitCode=0 Jan 22 10:14:19 crc kubenswrapper[4892]: I0122 10:14:19.171824 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qgdnd/crc-debug-52bfm" event={"ID":"c1ff1ae5-8c34-494d-b685-f4f6fa36f042","Type":"ContainerDied","Data":"381e03a94fa8c5176c2676a4b2b1d77f5564fd8e3f5b2b9783136294fc860f59"} Jan 22 10:14:19 crc kubenswrapper[4892]: I0122 10:14:19.211370 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qgdnd/crc-debug-52bfm"] Jan 22 10:14:19 crc kubenswrapper[4892]: I0122 10:14:19.221375 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qgdnd/crc-debug-52bfm"] Jan 22 10:14:19 crc kubenswrapper[4892]: I0122 10:14:19.763640 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-t8w6r_5c3dbc91-88ca-44dc-a4fd-fb147d8df3e0/cert-manager-controller/0.log" Jan 22 10:14:19 crc kubenswrapper[4892]: I0122 10:14:19.776431 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-rzj7t_6035615e-d06d-45df-b927-9233155546ce/cert-manager-cainjector/0.log" Jan 22 10:14:19 crc kubenswrapper[4892]: I0122 10:14:19.787878 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-rv9dl_fc56bdec-62b2-486e-84c5-363cc15c5cec/cert-manager-webhook/0.log" Jan 22 10:14:20 crc kubenswrapper[4892]: I0122 10:14:20.275948 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qgdnd/crc-debug-52bfm" Jan 22 10:14:20 crc kubenswrapper[4892]: I0122 10:14:20.333101 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c1ff1ae5-8c34-494d-b685-f4f6fa36f042-host\") pod \"c1ff1ae5-8c34-494d-b685-f4f6fa36f042\" (UID: \"c1ff1ae5-8c34-494d-b685-f4f6fa36f042\") " Jan 22 10:14:20 crc kubenswrapper[4892]: I0122 10:14:20.333249 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1ff1ae5-8c34-494d-b685-f4f6fa36f042-host" (OuterVolumeSpecName: "host") pod "c1ff1ae5-8c34-494d-b685-f4f6fa36f042" (UID: "c1ff1ae5-8c34-494d-b685-f4f6fa36f042"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:14:20 crc kubenswrapper[4892]: I0122 10:14:20.333418 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w25tc\" (UniqueName: \"kubernetes.io/projected/c1ff1ae5-8c34-494d-b685-f4f6fa36f042-kube-api-access-w25tc\") pod \"c1ff1ae5-8c34-494d-b685-f4f6fa36f042\" (UID: \"c1ff1ae5-8c34-494d-b685-f4f6fa36f042\") " Jan 22 10:14:20 crc kubenswrapper[4892]: I0122 10:14:20.334020 4892 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c1ff1ae5-8c34-494d-b685-f4f6fa36f042-host\") on node \"crc\" DevicePath \"\"" Jan 22 10:14:20 crc kubenswrapper[4892]: I0122 10:14:20.339529 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1ff1ae5-8c34-494d-b685-f4f6fa36f042-kube-api-access-w25tc" (OuterVolumeSpecName: "kube-api-access-w25tc") pod "c1ff1ae5-8c34-494d-b685-f4f6fa36f042" (UID: "c1ff1ae5-8c34-494d-b685-f4f6fa36f042"). InnerVolumeSpecName "kube-api-access-w25tc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:14:20 crc kubenswrapper[4892]: I0122 10:14:20.434761 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w25tc\" (UniqueName: \"kubernetes.io/projected/c1ff1ae5-8c34-494d-b685-f4f6fa36f042-kube-api-access-w25tc\") on node \"crc\" DevicePath \"\"" Jan 22 10:14:21 crc kubenswrapper[4892]: I0122 10:14:21.189608 4892 scope.go:117] "RemoveContainer" containerID="381e03a94fa8c5176c2676a4b2b1d77f5564fd8e3f5b2b9783136294fc860f59" Jan 22 10:14:21 crc kubenswrapper[4892]: I0122 10:14:21.189702 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qgdnd/crc-debug-52bfm" Jan 22 10:14:21 crc kubenswrapper[4892]: I0122 10:14:21.428276 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1ff1ae5-8c34-494d-b685-f4f6fa36f042" path="/var/lib/kubelet/pods/c1ff1ae5-8c34-494d-b685-f4f6fa36f042/volumes" Jan 22 10:14:24 crc kubenswrapper[4892]: I0122 10:14:24.419578 4892 scope.go:117] "RemoveContainer" containerID="10bae86c283fa7a460d03cd4fb5e38372c678bc14876631954f38cd2968b54bf" Jan 22 10:14:24 crc kubenswrapper[4892]: E0122 10:14:24.420035 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:14:25 crc kubenswrapper[4892]: I0122 10:14:25.196513 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-5g6g2_d0e47195-84b2-4249-8f2e-833525b47d1c/nmstate-console-plugin/0.log" Jan 22 10:14:25 crc kubenswrapper[4892]: I0122 10:14:25.214314 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-c4k72_76d51902-a31d-4cfd-aa0a-de6c055c79fd/nmstate-handler/0.log" Jan 22 10:14:25 crc kubenswrapper[4892]: I0122 10:14:25.223463 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-wcg8m_9911d829-131f-4c59-9268-c0165a5f1126/nmstate-metrics/0.log" Jan 22 10:14:25 crc kubenswrapper[4892]: I0122 10:14:25.231631 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-wcg8m_9911d829-131f-4c59-9268-c0165a5f1126/kube-rbac-proxy/0.log" Jan 22 10:14:25 crc kubenswrapper[4892]: I0122 10:14:25.244102 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-blljf_ebf0d927-7aa3-4f75-b5be-7037df253175/nmstate-operator/0.log" Jan 22 10:14:25 crc kubenswrapper[4892]: I0122 10:14:25.257464 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-t6lmb_21249177-5044-4f9b-a0dc-dcad499ec3ad/nmstate-webhook/0.log" Jan 22 10:14:36 crc kubenswrapper[4892]: I0122 10:14:36.177674 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-szvst_e6301fe9-08d8-4bac-87e9-227fcc218129/controller/0.log" Jan 22 10:14:36 crc kubenswrapper[4892]: I0122 10:14:36.184466 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-szvst_e6301fe9-08d8-4bac-87e9-227fcc218129/kube-rbac-proxy/0.log" Jan 22 10:14:36 crc kubenswrapper[4892]: I0122 10:14:36.206133 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/controller/0.log" Jan 22 10:14:37 crc kubenswrapper[4892]: I0122 10:14:37.638956 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/frr/0.log" Jan 22 10:14:37 crc kubenswrapper[4892]: I0122 10:14:37.648327 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/reloader/0.log" Jan 22 10:14:37 crc kubenswrapper[4892]: 
I0122 10:14:37.655729 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/frr-metrics/0.log" Jan 22 10:14:37 crc kubenswrapper[4892]: I0122 10:14:37.663537 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/kube-rbac-proxy/0.log" Jan 22 10:14:37 crc kubenswrapper[4892]: I0122 10:14:37.672882 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/kube-rbac-proxy-frr/0.log" Jan 22 10:14:37 crc kubenswrapper[4892]: I0122 10:14:37.680279 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/cp-frr-files/0.log" Jan 22 10:14:37 crc kubenswrapper[4892]: I0122 10:14:37.686951 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/cp-reloader/0.log" Jan 22 10:14:37 crc kubenswrapper[4892]: I0122 10:14:37.693707 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/cp-metrics/0.log" Jan 22 10:14:37 crc kubenswrapper[4892]: I0122 10:14:37.705332 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-drtv6_667e6efb-6488-461d-8e5f-380e05c4956e/frr-k8s-webhook-server/0.log" Jan 22 10:14:37 crc kubenswrapper[4892]: I0122 10:14:37.725826 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6744fff56c-5c2wg_e2e7c48f-6e23-4156-b679-30f2d9735501/manager/0.log" Jan 22 10:14:37 crc kubenswrapper[4892]: I0122 10:14:37.735765 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5bdbd58466-bwr22_b49a3e83-8e00-4934-8968-97d1905959d0/webhook-server/0.log" Jan 22 10:14:38 crc kubenswrapper[4892]: I0122 10:14:38.069563 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2bjk4_caa80b1d-b3d2-47d6-99e6-73420bc5f61d/speaker/0.log" Jan 22 10:14:38 crc kubenswrapper[4892]: I0122 10:14:38.079537 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2bjk4_caa80b1d-b3d2-47d6-99e6-73420bc5f61d/kube-rbac-proxy/0.log" Jan 22 10:14:39 crc kubenswrapper[4892]: I0122 10:14:39.419153 4892 scope.go:117] "RemoveContainer" containerID="10bae86c283fa7a460d03cd4fb5e38372c678bc14876631954f38cd2968b54bf" Jan 22 10:14:39 crc kubenswrapper[4892]: E0122 10:14:39.419793 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:14:42 crc kubenswrapper[4892]: I0122 10:14:42.095794 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8_fe89e20a-62fc-4d26-ae68-73810243a106/extract/0.log" Jan 22 10:14:42 crc kubenswrapper[4892]: I0122 10:14:42.102580 4892 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8_fe89e20a-62fc-4d26-ae68-73810243a106/util/0.log" Jan 22 10:14:42 crc kubenswrapper[4892]: I0122 10:14:42.108929 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8_fe89e20a-62fc-4d26-ae68-73810243a106/pull/0.log" Jan 22 10:14:42 crc kubenswrapper[4892]: I0122 10:14:42.125389 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn_8c2da807-7b14-4384-bf1a-dcfad84a6a14/extract/0.log" Jan 22 10:14:42 crc kubenswrapper[4892]: I0122 10:14:42.134247 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn_8c2da807-7b14-4384-bf1a-dcfad84a6a14/util/0.log" Jan 22 10:14:42 crc kubenswrapper[4892]: I0122 10:14:42.143203 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn_8c2da807-7b14-4384-bf1a-dcfad84a6a14/pull/0.log" Jan 22 10:14:42 crc kubenswrapper[4892]: I0122 10:14:42.419611 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-75cn7_0e4feacd-2fae-4242-81a6-2de47aca5dd7/registry-server/0.log" Jan 22 10:14:42 crc kubenswrapper[4892]: I0122 10:14:42.425440 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-75cn7_0e4feacd-2fae-4242-81a6-2de47aca5dd7/extract-utilities/0.log" Jan 22 10:14:42 crc kubenswrapper[4892]: I0122 10:14:42.434209 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-75cn7_0e4feacd-2fae-4242-81a6-2de47aca5dd7/extract-content/0.log" Jan 22 10:14:43 crc kubenswrapper[4892]: I0122 10:14:43.067567 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vrbh2_49114a09-ac3a-4dbd-99f1-26543fbf5dcf/registry-server/0.log" Jan 22 10:14:43 crc kubenswrapper[4892]: I0122 10:14:43.073378 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vrbh2_49114a09-ac3a-4dbd-99f1-26543fbf5dcf/extract-utilities/0.log" Jan 22 10:14:43 crc kubenswrapper[4892]: I0122 10:14:43.081642 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vrbh2_49114a09-ac3a-4dbd-99f1-26543fbf5dcf/extract-content/0.log" Jan 22 10:14:43 crc kubenswrapper[4892]: I0122 10:14:43.097862 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-xt4nm_02e012df-582c-41ec-9c63-ff6dd7cc08c6/marketplace-operator/0.log" Jan 22 10:14:43 crc kubenswrapper[4892]: I0122 10:14:43.235938 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7pjdw_1c53bdc3-44ab-4be2-9f83-2d241776a337/registry-server/0.log" Jan 22 10:14:43 crc kubenswrapper[4892]: I0122 10:14:43.240827 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7pjdw_1c53bdc3-44ab-4be2-9f83-2d241776a337/extract-utilities/0.log" Jan 22 10:14:43 crc kubenswrapper[4892]: I0122 10:14:43.248489 4892 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-7pjdw_1c53bdc3-44ab-4be2-9f83-2d241776a337/extract-content/0.log" Jan 22 10:14:43 crc kubenswrapper[4892]: I0122 10:14:43.918275 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pf4pl_91b13dd5-7aad-496a-8138-9a9e638a0a01/registry-server/0.log" Jan 22 10:14:43 crc kubenswrapper[4892]: I0122 10:14:43.923927 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pf4pl_91b13dd5-7aad-496a-8138-9a9e638a0a01/extract-utilities/0.log" Jan 22 10:14:43 crc kubenswrapper[4892]: I0122 10:14:43.930146 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pf4pl_91b13dd5-7aad-496a-8138-9a9e638a0a01/extract-content/0.log" Jan 22 10:14:50 crc kubenswrapper[4892]: I0122 10:14:50.418622 4892 scope.go:117] "RemoveContainer" containerID="10bae86c283fa7a460d03cd4fb5e38372c678bc14876631954f38cd2968b54bf" Jan 22 10:14:50 crc kubenswrapper[4892]: E0122 10:14:50.419389 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:15:00 crc kubenswrapper[4892]: I0122 10:15:00.185775 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484615-mqjww"] Jan 22 10:15:00 crc kubenswrapper[4892]: E0122 10:15:00.188024 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ff1ae5-8c34-494d-b685-f4f6fa36f042" containerName="container-00" Jan 22 10:15:00 crc kubenswrapper[4892]: I0122 10:15:00.188147 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ff1ae5-8c34-494d-b685-f4f6fa36f042" containerName="container-00" Jan 22 10:15:00 crc kubenswrapper[4892]: I0122 10:15:00.188492 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1ff1ae5-8c34-494d-b685-f4f6fa36f042" containerName="container-00" Jan 22 10:15:00 crc kubenswrapper[4892]: I0122 10:15:00.189399 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-mqjww" Jan 22 10:15:00 crc kubenswrapper[4892]: I0122 10:15:00.196157 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484615-mqjww"] Jan 22 10:15:00 crc kubenswrapper[4892]: I0122 10:15:00.198649 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 10:15:00 crc kubenswrapper[4892]: I0122 10:15:00.198945 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 10:15:00 crc kubenswrapper[4892]: I0122 10:15:00.367312 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bcd0da9a-d4a8-42e0-876f-334f488a0885-secret-volume\") pod \"collect-profiles-29484615-mqjww\" (UID: \"bcd0da9a-d4a8-42e0-876f-334f488a0885\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-mqjww" Jan 22 10:15:00 crc kubenswrapper[4892]: I0122 10:15:00.367648 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bcd0da9a-d4a8-42e0-876f-334f488a0885-config-volume\") pod \"collect-profiles-29484615-mqjww\" (UID: \"bcd0da9a-d4a8-42e0-876f-334f488a0885\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-mqjww" Jan 22 10:15:00 crc kubenswrapper[4892]: I0122 10:15:00.367774 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vs6q\" (UniqueName: \"kubernetes.io/projected/bcd0da9a-d4a8-42e0-876f-334f488a0885-kube-api-access-9vs6q\") pod \"collect-profiles-29484615-mqjww\" (UID: \"bcd0da9a-d4a8-42e0-876f-334f488a0885\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-mqjww" Jan 22 10:15:00 crc kubenswrapper[4892]: I0122 10:15:00.469269 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bcd0da9a-d4a8-42e0-876f-334f488a0885-secret-volume\") pod \"collect-profiles-29484615-mqjww\" (UID: \"bcd0da9a-d4a8-42e0-876f-334f488a0885\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-mqjww" Jan 22 10:15:00 crc kubenswrapper[4892]: I0122 10:15:00.469414 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bcd0da9a-d4a8-42e0-876f-334f488a0885-config-volume\") pod \"collect-profiles-29484615-mqjww\" (UID: \"bcd0da9a-d4a8-42e0-876f-334f488a0885\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-mqjww" Jan 22 10:15:00 crc kubenswrapper[4892]: I0122 10:15:00.469462 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vs6q\" (UniqueName: \"kubernetes.io/projected/bcd0da9a-d4a8-42e0-876f-334f488a0885-kube-api-access-9vs6q\") pod \"collect-profiles-29484615-mqjww\" (UID: \"bcd0da9a-d4a8-42e0-876f-334f488a0885\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-mqjww" Jan 22 10:15:00 crc kubenswrapper[4892]: I0122 10:15:00.470483 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bcd0da9a-d4a8-42e0-876f-334f488a0885-config-volume\") pod 
\"collect-profiles-29484615-mqjww\" (UID: \"bcd0da9a-d4a8-42e0-876f-334f488a0885\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-mqjww" Jan 22 10:15:00 crc kubenswrapper[4892]: I0122 10:15:00.478739 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bcd0da9a-d4a8-42e0-876f-334f488a0885-secret-volume\") pod \"collect-profiles-29484615-mqjww\" (UID: \"bcd0da9a-d4a8-42e0-876f-334f488a0885\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-mqjww" Jan 22 10:15:00 crc kubenswrapper[4892]: I0122 10:15:00.486045 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vs6q\" (UniqueName: \"kubernetes.io/projected/bcd0da9a-d4a8-42e0-876f-334f488a0885-kube-api-access-9vs6q\") pod \"collect-profiles-29484615-mqjww\" (UID: \"bcd0da9a-d4a8-42e0-876f-334f488a0885\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-mqjww" Jan 22 10:15:00 crc kubenswrapper[4892]: I0122 10:15:00.523446 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-mqjww" Jan 22 10:15:01 crc kubenswrapper[4892]: I0122 10:15:01.003765 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484615-mqjww"] Jan 22 10:15:01 crc kubenswrapper[4892]: I0122 10:15:01.509589 4892 generic.go:334] "Generic (PLEG): container finished" podID="bcd0da9a-d4a8-42e0-876f-334f488a0885" containerID="55457b34c74a17005d896e4beb8c09c5b73a653fb143b2054e6c39898220386c" exitCode=0 Jan 22 10:15:01 crc kubenswrapper[4892]: I0122 10:15:01.509658 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-mqjww" event={"ID":"bcd0da9a-d4a8-42e0-876f-334f488a0885","Type":"ContainerDied","Data":"55457b34c74a17005d896e4beb8c09c5b73a653fb143b2054e6c39898220386c"} Jan 22 10:15:01 crc kubenswrapper[4892]: I0122 10:15:01.509939 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-mqjww" event={"ID":"bcd0da9a-d4a8-42e0-876f-334f488a0885","Type":"ContainerStarted","Data":"215f917df5ba46a20b1dd893820e0a25d4dea112579fa04466daceb696a334bf"} Jan 22 10:15:02 crc kubenswrapper[4892]: I0122 10:15:02.861813 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-mqjww" Jan 22 10:15:03 crc kubenswrapper[4892]: I0122 10:15:03.031735 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vs6q\" (UniqueName: \"kubernetes.io/projected/bcd0da9a-d4a8-42e0-876f-334f488a0885-kube-api-access-9vs6q\") pod \"bcd0da9a-d4a8-42e0-876f-334f488a0885\" (UID: \"bcd0da9a-d4a8-42e0-876f-334f488a0885\") " Jan 22 10:15:03 crc kubenswrapper[4892]: I0122 10:15:03.032128 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bcd0da9a-d4a8-42e0-876f-334f488a0885-secret-volume\") pod \"bcd0da9a-d4a8-42e0-876f-334f488a0885\" (UID: \"bcd0da9a-d4a8-42e0-876f-334f488a0885\") " Jan 22 10:15:03 crc kubenswrapper[4892]: I0122 10:15:03.032829 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bcd0da9a-d4a8-42e0-876f-334f488a0885-config-volume\") pod \"bcd0da9a-d4a8-42e0-876f-334f488a0885\" (UID: \"bcd0da9a-d4a8-42e0-876f-334f488a0885\") " Jan 22 10:15:03 crc kubenswrapper[4892]: I0122 10:15:03.033461 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcd0da9a-d4a8-42e0-876f-334f488a0885-config-volume" (OuterVolumeSpecName: "config-volume") pod "bcd0da9a-d4a8-42e0-876f-334f488a0885" (UID: "bcd0da9a-d4a8-42e0-876f-334f488a0885"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:15:03 crc kubenswrapper[4892]: I0122 10:15:03.037452 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcd0da9a-d4a8-42e0-876f-334f488a0885-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bcd0da9a-d4a8-42e0-876f-334f488a0885" (UID: "bcd0da9a-d4a8-42e0-876f-334f488a0885"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:15:03 crc kubenswrapper[4892]: I0122 10:15:03.037585 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcd0da9a-d4a8-42e0-876f-334f488a0885-kube-api-access-9vs6q" (OuterVolumeSpecName: "kube-api-access-9vs6q") pod "bcd0da9a-d4a8-42e0-876f-334f488a0885" (UID: "bcd0da9a-d4a8-42e0-876f-334f488a0885"). InnerVolumeSpecName "kube-api-access-9vs6q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:15:03 crc kubenswrapper[4892]: I0122 10:15:03.136801 4892 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bcd0da9a-d4a8-42e0-876f-334f488a0885-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 10:15:03 crc kubenswrapper[4892]: I0122 10:15:03.136837 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vs6q\" (UniqueName: \"kubernetes.io/projected/bcd0da9a-d4a8-42e0-876f-334f488a0885-kube-api-access-9vs6q\") on node \"crc\" DevicePath \"\"" Jan 22 10:15:03 crc kubenswrapper[4892]: I0122 10:15:03.136849 4892 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bcd0da9a-d4a8-42e0-876f-334f488a0885-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 10:15:03 crc kubenswrapper[4892]: I0122 10:15:03.526663 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-mqjww" event={"ID":"bcd0da9a-d4a8-42e0-876f-334f488a0885","Type":"ContainerDied","Data":"215f917df5ba46a20b1dd893820e0a25d4dea112579fa04466daceb696a334bf"} Jan 22 10:15:03 crc kubenswrapper[4892]: I0122 10:15:03.526705 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="215f917df5ba46a20b1dd893820e0a25d4dea112579fa04466daceb696a334bf" Jan 22 10:15:03 crc kubenswrapper[4892]: I0122 10:15:03.526753 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-mqjww" Jan 22 10:15:03 crc kubenswrapper[4892]: I0122 10:15:03.936345 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484570-4r2f9"] Jan 22 10:15:03 crc kubenswrapper[4892]: I0122 10:15:03.945234 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484570-4r2f9"] Jan 22 10:15:05 crc kubenswrapper[4892]: I0122 10:15:05.418772 4892 scope.go:117] "RemoveContainer" containerID="10bae86c283fa7a460d03cd4fb5e38372c678bc14876631954f38cd2968b54bf" Jan 22 10:15:05 crc kubenswrapper[4892]: E0122 10:15:05.419079 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:15:05 crc kubenswrapper[4892]: I0122 10:15:05.431827 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69a7e907-5a2d-4c67-939b-c27548a17903" path="/var/lib/kubelet/pods/69a7e907-5a2d-4c67-939b-c27548a17903/volumes" Jan 22 10:15:16 crc kubenswrapper[4892]: I0122 10:15:16.418839 4892 scope.go:117] "RemoveContainer" containerID="10bae86c283fa7a460d03cd4fb5e38372c678bc14876631954f38cd2968b54bf" Jan 22 10:15:16 crc kubenswrapper[4892]: E0122 10:15:16.419707 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:15:27 crc kubenswrapper[4892]: I0122 10:15:27.419026 4892 scope.go:117] "RemoveContainer" containerID="10bae86c283fa7a460d03cd4fb5e38372c678bc14876631954f38cd2968b54bf" Jan 22 10:15:27 crc kubenswrapper[4892]: E0122 10:15:27.419672 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:15:39 crc kubenswrapper[4892]: I0122 10:15:39.418649 4892 scope.go:117] "RemoveContainer" containerID="10bae86c283fa7a460d03cd4fb5e38372c678bc14876631954f38cd2968b54bf" Jan 22 10:15:39 crc kubenswrapper[4892]: E0122 10:15:39.419500 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:15:41 crc kubenswrapper[4892]: I0122 10:15:41.650055 4892 scope.go:117] "RemoveContainer" containerID="98b85a9e524dd176269e69c674cdb3658f15b63d24bce2c41933041a87fa6ba0" Jan 22 10:15:53 crc kubenswrapper[4892]: I0122 10:15:53.438232 4892 scope.go:117] "RemoveContainer" containerID="10bae86c283fa7a460d03cd4fb5e38372c678bc14876631954f38cd2968b54bf" Jan 22 10:15:53 crc kubenswrapper[4892]: E0122 10:15:53.439020 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:15:56 crc kubenswrapper[4892]: I0122 10:15:56.140864 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-szvst_e6301fe9-08d8-4bac-87e9-227fcc218129/controller/0.log" Jan 22 10:15:56 crc kubenswrapper[4892]: I0122 10:15:56.146405 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-szvst_e6301fe9-08d8-4bac-87e9-227fcc218129/kube-rbac-proxy/0.log" Jan 22 10:15:56 crc kubenswrapper[4892]: I0122 10:15:56.170505 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/controller/0.log" Jan 22 10:15:56 crc kubenswrapper[4892]: I0122 10:15:56.342009 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-t8w6r_5c3dbc91-88ca-44dc-a4fd-fb147d8df3e0/cert-manager-controller/0.log" Jan 22 10:15:56 crc kubenswrapper[4892]: I0122 10:15:56.364100 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-rzj7t_6035615e-d06d-45df-b927-9233155546ce/cert-manager-cainjector/0.log" Jan 22 10:15:56 crc kubenswrapper[4892]: I0122 10:15:56.381734 4892 log.go:25] "Finished 
parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-rv9dl_fc56bdec-62b2-486e-84c5-363cc15c5cec/cert-manager-webhook/0.log" Jan 22 10:15:57 crc kubenswrapper[4892]: I0122 10:15:57.472620 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59dd8b7cbf-mcfls_f7ec268a-c82e-455e-b4b9-d0f96998c015/manager/0.log" Jan 22 10:15:57 crc kubenswrapper[4892]: I0122 10:15:57.538951 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-fnrjr_815dba39-30ed-4471-bf04-ecc573373016/manager/0.log" Jan 22 10:15:57 crc kubenswrapper[4892]: I0122 10:15:57.560012 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-sx9p8_c020c33f-f12c-47ce-9639-c0069dff8bc4/manager/0.log" Jan 22 10:15:57 crc kubenswrapper[4892]: I0122 10:15:57.574591 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd_97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8/extract/0.log" Jan 22 10:15:57 crc kubenswrapper[4892]: I0122 10:15:57.581207 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd_97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8/util/0.log" Jan 22 10:15:57 crc kubenswrapper[4892]: I0122 10:15:57.600467 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd_97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8/pull/0.log" Jan 22 10:15:57 crc kubenswrapper[4892]: I0122 10:15:57.761898 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/frr/0.log" Jan 22 10:15:57 crc kubenswrapper[4892]: I0122 10:15:57.765015 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-9lqvx_2047bcfa-42e4-4e81-b2c9-47f4a876ea84/manager/0.log" Jan 22 10:15:57 crc kubenswrapper[4892]: I0122 10:15:57.779152 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/reloader/0.log" Jan 22 10:15:57 crc kubenswrapper[4892]: I0122 10:15:57.783783 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-b9v4x_fcd15b84-585b-4984-9c1f-26a6c585ada4/manager/0.log" Jan 22 10:15:57 crc kubenswrapper[4892]: I0122 10:15:57.787562 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/frr-metrics/0.log" Jan 22 10:15:57 crc kubenswrapper[4892]: I0122 10:15:57.797266 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/kube-rbac-proxy/0.log" Jan 22 10:15:57 crc kubenswrapper[4892]: I0122 10:15:57.806139 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-wkmzq_c9a77485-9340-433e-8bf6-cd47551438a9/manager/0.log" Jan 22 10:15:57 crc kubenswrapper[4892]: I0122 10:15:57.813112 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/kube-rbac-proxy-frr/0.log" Jan 22 10:15:57 crc kubenswrapper[4892]: I0122 
10:15:57.822312 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/cp-frr-files/0.log" Jan 22 10:15:57 crc kubenswrapper[4892]: I0122 10:15:57.832146 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/cp-reloader/0.log" Jan 22 10:15:57 crc kubenswrapper[4892]: I0122 10:15:57.844404 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/cp-metrics/0.log" Jan 22 10:15:57 crc kubenswrapper[4892]: I0122 10:15:57.858417 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-drtv6_667e6efb-6488-461d-8e5f-380e05c4956e/frr-k8s-webhook-server/0.log" Jan 22 10:15:57 crc kubenswrapper[4892]: I0122 10:15:57.885516 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6744fff56c-5c2wg_e2e7c48f-6e23-4156-b679-30f2d9735501/manager/0.log" Jan 22 10:15:57 crc kubenswrapper[4892]: I0122 10:15:57.897816 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5bdbd58466-bwr22_b49a3e83-8e00-4934-8968-97d1905959d0/webhook-server/0.log" Jan 22 10:15:58 crc kubenswrapper[4892]: I0122 10:15:58.201194 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-25z65_4f507c71-c9ab-4398-b25a-b6070d41f2b7/manager/0.log" Jan 22 10:15:58 crc kubenswrapper[4892]: I0122 10:15:58.219315 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69d6c9f5b8-dcjs4_186e1123-d674-468b-91c1-92eb6bca4a30/manager/0.log" Jan 22 10:15:58 crc kubenswrapper[4892]: I0122 10:15:58.334128 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-vm28p_361a2cfd-62a4-40cc-b85c-7e81e6adb91d/manager/0.log" Jan 22 10:15:58 crc kubenswrapper[4892]: I0122 10:15:58.348964 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-67mcr_f942aff3-65c5-4507-af71-0e4596abc4cf/manager/0.log" Jan 22 10:15:58 crc kubenswrapper[4892]: I0122 10:15:58.362137 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2bjk4_caa80b1d-b3d2-47d6-99e6-73420bc5f61d/speaker/0.log" Jan 22 10:15:58 crc kubenswrapper[4892]: I0122 10:15:58.372851 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2bjk4_caa80b1d-b3d2-47d6-99e6-73420bc5f61d/kube-rbac-proxy/0.log" Jan 22 10:15:58 crc kubenswrapper[4892]: I0122 10:15:58.394168 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-4ldkj_fd035f9e-2587-4286-85d9-db7c209970de/manager/0.log" Jan 22 10:15:58 crc kubenswrapper[4892]: I0122 10:15:58.448325 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5d8f59fb49-dvlzw_928d4875-5da0-47ce-a68d-99fed2b7edce/manager/0.log" Jan 22 10:15:58 crc kubenswrapper[4892]: I0122 10:15:58.531638 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-pkbln_8a19ffda-db08-44ec-bc17-d70c74f9552e/manager/0.log" Jan 22 10:15:58 crc 
kubenswrapper[4892]: I0122 10:15:58.542693 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-sjml2_43ab3264-2c0d-44a8-ab85-66efc360bf67/manager/0.log" Jan 22 10:15:58 crc kubenswrapper[4892]: I0122 10:15:58.560896 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr_c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea/manager/0.log" Jan 22 10:15:58 crc kubenswrapper[4892]: I0122 10:15:58.685342 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-698d6bb84b-sckbn_bf11bbca-62bd-4421-b0be-a62f87a6d600/operator/0.log" Jan 22 10:15:59 crc kubenswrapper[4892]: I0122 10:15:59.605979 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-t8w6r_5c3dbc91-88ca-44dc-a4fd-fb147d8df3e0/cert-manager-controller/0.log" Jan 22 10:15:59 crc kubenswrapper[4892]: I0122 10:15:59.620263 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-rzj7t_6035615e-d06d-45df-b927-9233155546ce/cert-manager-cainjector/0.log" Jan 22 10:15:59 crc kubenswrapper[4892]: I0122 10:15:59.629293 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-rv9dl_fc56bdec-62b2-486e-84c5-363cc15c5cec/cert-manager-webhook/0.log" Jan 22 10:15:59 crc kubenswrapper[4892]: I0122 10:15:59.975800 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-788c8b99b5-cws6m_7b2bb8eb-1122-4141-a4ed-c3d316c8b821/manager/0.log" Jan 22 10:16:00 crc kubenswrapper[4892]: I0122 10:16:00.040481 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-hq2gz_016ec7ec-1244-47ab-81ba-957ed4b83b4f/registry-server/0.log" Jan 22 10:16:00 crc kubenswrapper[4892]: I0122 10:16:00.090602 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-9htzp_4ce3456e-dba6-498d-bf5a-aef2832489fe/manager/0.log" Jan 22 10:16:00 crc kubenswrapper[4892]: I0122 10:16:00.114269 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-hf9ft_e23d3dd6-bce9-496f-840b-0bbd3017826f/manager/0.log" Jan 22 10:16:00 crc kubenswrapper[4892]: I0122 10:16:00.135777 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-hkmzg_7be69e64-d272-47f2-933a-4925c0aad02c/operator/0.log" Jan 22 10:16:00 crc kubenswrapper[4892]: I0122 10:16:00.163369 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-gfcjl_f7dcb7b0-0580-4aff-8770-377761a44f88/manager/0.log" Jan 22 10:16:00 crc kubenswrapper[4892]: I0122 10:16:00.233104 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-2n9gl_062ff35c-ceb7-44b0-a2ef-1d79a14a444c/manager/0.log" Jan 22 10:16:00 crc kubenswrapper[4892]: I0122 10:16:00.253881 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-hj2tb_be68c0da-a0d9-463c-be32-6191b85ae620/manager/0.log" Jan 22 10:16:00 crc kubenswrapper[4892]: I0122 10:16:00.266257 4892 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5ffb9c6597-xq8jw_b6638ff5-13e6-44b1-8711-0c775882282f/manager/0.log" Jan 22 10:16:00 crc kubenswrapper[4892]: I0122 10:16:00.363436 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-wf6dw_3a01b910-5841-4f20-b270-c7040213ac8d/control-plane-machine-set-operator/0.log" Jan 22 10:16:00 crc kubenswrapper[4892]: I0122 10:16:00.374204 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mhgmk_09f94488-4261-4a70-ab65-e85c42ba3313/kube-rbac-proxy/0.log" Jan 22 10:16:00 crc kubenswrapper[4892]: I0122 10:16:00.382501 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mhgmk_09f94488-4261-4a70-ab65-e85c42ba3313/machine-api-operator/0.log" Jan 22 10:16:01 crc kubenswrapper[4892]: I0122 10:16:01.095302 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59dd8b7cbf-mcfls_f7ec268a-c82e-455e-b4b9-d0f96998c015/manager/0.log" Jan 22 10:16:01 crc kubenswrapper[4892]: I0122 10:16:01.131901 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-fnrjr_815dba39-30ed-4471-bf04-ecc573373016/manager/0.log" Jan 22 10:16:01 crc kubenswrapper[4892]: I0122 10:16:01.147324 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-sx9p8_c020c33f-f12c-47ce-9639-c0069dff8bc4/manager/0.log" Jan 22 10:16:01 crc kubenswrapper[4892]: I0122 10:16:01.156383 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd_97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8/extract/0.log" Jan 22 10:16:01 crc kubenswrapper[4892]: I0122 10:16:01.163688 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd_97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8/util/0.log" Jan 22 10:16:01 crc kubenswrapper[4892]: I0122 10:16:01.171589 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd_97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8/pull/0.log" Jan 22 10:16:01 crc kubenswrapper[4892]: I0122 10:16:01.291998 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-9lqvx_2047bcfa-42e4-4e81-b2c9-47f4a876ea84/manager/0.log" Jan 22 10:16:01 crc kubenswrapper[4892]: I0122 10:16:01.303714 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-b9v4x_fcd15b84-585b-4984-9c1f-26a6c585ada4/manager/0.log" Jan 22 10:16:01 crc kubenswrapper[4892]: I0122 10:16:01.338059 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-wkmzq_c9a77485-9340-433e-8bf6-cd47551438a9/manager/0.log" Jan 22 10:16:01 crc kubenswrapper[4892]: I0122 10:16:01.567021 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-5g6g2_d0e47195-84b2-4249-8f2e-833525b47d1c/nmstate-console-plugin/0.log" Jan 22 10:16:01 crc kubenswrapper[4892]: I0122 
10:16:01.584634 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-c4k72_76d51902-a31d-4cfd-aa0a-de6c055c79fd/nmstate-handler/0.log" Jan 22 10:16:01 crc kubenswrapper[4892]: I0122 10:16:01.598671 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-wcg8m_9911d829-131f-4c59-9268-c0165a5f1126/nmstate-metrics/0.log" Jan 22 10:16:01 crc kubenswrapper[4892]: I0122 10:16:01.606560 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-wcg8m_9911d829-131f-4c59-9268-c0165a5f1126/kube-rbac-proxy/0.log" Jan 22 10:16:01 crc kubenswrapper[4892]: I0122 10:16:01.621628 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-blljf_ebf0d927-7aa3-4f75-b5be-7037df253175/nmstate-operator/0.log" Jan 22 10:16:01 crc kubenswrapper[4892]: I0122 10:16:01.634031 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-t6lmb_21249177-5044-4f9b-a0dc-dcad499ec3ad/nmstate-webhook/0.log" Jan 22 10:16:01 crc kubenswrapper[4892]: I0122 10:16:01.636739 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-25z65_4f507c71-c9ab-4398-b25a-b6070d41f2b7/manager/0.log" Jan 22 10:16:01 crc kubenswrapper[4892]: I0122 10:16:01.647160 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69d6c9f5b8-dcjs4_186e1123-d674-468b-91c1-92eb6bca4a30/manager/0.log" Jan 22 10:16:01 crc kubenswrapper[4892]: I0122 10:16:01.710836 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-vm28p_361a2cfd-62a4-40cc-b85c-7e81e6adb91d/manager/0.log" Jan 22 10:16:01 crc kubenswrapper[4892]: I0122 10:16:01.720189 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-67mcr_f942aff3-65c5-4507-af71-0e4596abc4cf/manager/0.log" Jan 22 10:16:01 crc kubenswrapper[4892]: I0122 10:16:01.755769 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-4ldkj_fd035f9e-2587-4286-85d9-db7c209970de/manager/0.log" Jan 22 10:16:01 crc kubenswrapper[4892]: I0122 10:16:01.823034 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5d8f59fb49-dvlzw_928d4875-5da0-47ce-a68d-99fed2b7edce/manager/0.log" Jan 22 10:16:01 crc kubenswrapper[4892]: I0122 10:16:01.927778 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-pkbln_8a19ffda-db08-44ec-bc17-d70c74f9552e/manager/0.log" Jan 22 10:16:01 crc kubenswrapper[4892]: I0122 10:16:01.938647 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-sjml2_43ab3264-2c0d-44a8-ab85-66efc360bf67/manager/0.log" Jan 22 10:16:01 crc kubenswrapper[4892]: I0122 10:16:01.959244 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr_c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea/manager/0.log" Jan 22 10:16:02 crc kubenswrapper[4892]: I0122 10:16:02.094430 4892 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-init-698d6bb84b-sckbn_bf11bbca-62bd-4421-b0be-a62f87a6d600/operator/0.log" Jan 22 10:16:03 crc kubenswrapper[4892]: I0122 10:16:03.356658 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-788c8b99b5-cws6m_7b2bb8eb-1122-4141-a4ed-c3d316c8b821/manager/0.log" Jan 22 10:16:03 crc kubenswrapper[4892]: I0122 10:16:03.429920 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-hq2gz_016ec7ec-1244-47ab-81ba-957ed4b83b4f/registry-server/0.log" Jan 22 10:16:03 crc kubenswrapper[4892]: I0122 10:16:03.472481 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-9htzp_4ce3456e-dba6-498d-bf5a-aef2832489fe/manager/0.log" Jan 22 10:16:03 crc kubenswrapper[4892]: I0122 10:16:03.503587 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-hf9ft_e23d3dd6-bce9-496f-840b-0bbd3017826f/manager/0.log" Jan 22 10:16:03 crc kubenswrapper[4892]: I0122 10:16:03.528663 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-hkmzg_7be69e64-d272-47f2-933a-4925c0aad02c/operator/0.log" Jan 22 10:16:03 crc kubenswrapper[4892]: I0122 10:16:03.553115 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-gfcjl_f7dcb7b0-0580-4aff-8770-377761a44f88/manager/0.log" Jan 22 10:16:03 crc kubenswrapper[4892]: I0122 10:16:03.625053 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-2n9gl_062ff35c-ceb7-44b0-a2ef-1d79a14a444c/manager/0.log" Jan 22 10:16:03 crc kubenswrapper[4892]: I0122 10:16:03.638723 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-hj2tb_be68c0da-a0d9-463c-be32-6191b85ae620/manager/0.log" Jan 22 10:16:03 crc kubenswrapper[4892]: I0122 10:16:03.651256 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5ffb9c6597-xq8jw_b6638ff5-13e6-44b1-8711-0c775882282f/manager/0.log" Jan 22 10:16:05 crc kubenswrapper[4892]: I0122 10:16:05.382059 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-7rbdp_afe12181-a266-4b88-b591-e1c130d15254/kube-multus-additional-cni-plugins/0.log" Jan 22 10:16:05 crc kubenswrapper[4892]: I0122 10:16:05.390424 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-7rbdp_afe12181-a266-4b88-b591-e1c130d15254/egress-router-binary-copy/0.log" Jan 22 10:16:05 crc kubenswrapper[4892]: I0122 10:16:05.396994 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-7rbdp_afe12181-a266-4b88-b591-e1c130d15254/cni-plugins/0.log" Jan 22 10:16:05 crc kubenswrapper[4892]: I0122 10:16:05.405046 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-7rbdp_afe12181-a266-4b88-b591-e1c130d15254/bond-cni-plugin/0.log" Jan 22 10:16:05 crc kubenswrapper[4892]: I0122 10:16:05.414515 4892 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-7rbdp_afe12181-a266-4b88-b591-e1c130d15254/routeoverride-cni/0.log" Jan 22 10:16:05 crc kubenswrapper[4892]: I0122 10:16:05.418421 4892 scope.go:117] "RemoveContainer" containerID="10bae86c283fa7a460d03cd4fb5e38372c678bc14876631954f38cd2968b54bf" Jan 22 10:16:05 crc kubenswrapper[4892]: E0122 10:16:05.418657 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:16:05 crc kubenswrapper[4892]: I0122 10:16:05.426519 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-7rbdp_afe12181-a266-4b88-b591-e1c130d15254/whereabouts-cni-bincopy/0.log" Jan 22 10:16:05 crc kubenswrapper[4892]: I0122 10:16:05.437004 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-7rbdp_afe12181-a266-4b88-b591-e1c130d15254/whereabouts-cni/0.log" Jan 22 10:16:05 crc kubenswrapper[4892]: I0122 10:16:05.468039 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-86wr5_23125b22-0965-46a8-a698-dc256f032b3c/multus-admission-controller/0.log" Jan 22 10:16:05 crc kubenswrapper[4892]: I0122 10:16:05.474816 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-86wr5_23125b22-0965-46a8-a698-dc256f032b3c/kube-rbac-proxy/0.log" Jan 22 10:16:05 crc kubenswrapper[4892]: I0122 10:16:05.554412 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hz9vn_80ef00cc-97bb-4f08-ba72-3947ab29043f/kube-multus/2.log" Jan 22 10:16:05 crc kubenswrapper[4892]: I0122 10:16:05.628928 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hz9vn_80ef00cc-97bb-4f08-ba72-3947ab29043f/kube-multus/3.log" Jan 22 10:16:05 crc kubenswrapper[4892]: I0122 10:16:05.667908 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-5nnld_f7391f43-09a9-4333-8df2-72d4fdc02615/network-metrics-daemon/0.log" Jan 22 10:16:05 crc kubenswrapper[4892]: I0122 10:16:05.676172 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-5nnld_f7391f43-09a9-4333-8df2-72d4fdc02615/kube-rbac-proxy/0.log" Jan 22 10:16:19 crc kubenswrapper[4892]: I0122 10:16:19.419585 4892 scope.go:117] "RemoveContainer" containerID="10bae86c283fa7a460d03cd4fb5e38372c678bc14876631954f38cd2968b54bf" Jan 22 10:16:19 crc kubenswrapper[4892]: E0122 10:16:19.420463 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:16:31 crc kubenswrapper[4892]: I0122 10:16:31.426213 4892 scope.go:117] "RemoveContainer" containerID="10bae86c283fa7a460d03cd4fb5e38372c678bc14876631954f38cd2968b54bf" 
Jan 22 10:16:31 crc kubenswrapper[4892]: E0122 10:16:31.426885 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:16:44 crc kubenswrapper[4892]: I0122 10:16:44.418611 4892 scope.go:117] "RemoveContainer" containerID="10bae86c283fa7a460d03cd4fb5e38372c678bc14876631954f38cd2968b54bf" Jan 22 10:16:44 crc kubenswrapper[4892]: E0122 10:16:44.419446 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:16:55 crc kubenswrapper[4892]: I0122 10:16:55.420041 4892 scope.go:117] "RemoveContainer" containerID="10bae86c283fa7a460d03cd4fb5e38372c678bc14876631954f38cd2968b54bf" Jan 22 10:16:55 crc kubenswrapper[4892]: E0122 10:16:55.421819 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:17:08 crc kubenswrapper[4892]: I0122 10:17:08.419791 4892 scope.go:117] "RemoveContainer" containerID="10bae86c283fa7a460d03cd4fb5e38372c678bc14876631954f38cd2968b54bf" Jan 22 10:17:08 crc kubenswrapper[4892]: E0122 10:17:08.420620 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:17:20 crc kubenswrapper[4892]: I0122 10:17:20.418389 4892 scope.go:117] "RemoveContainer" containerID="10bae86c283fa7a460d03cd4fb5e38372c678bc14876631954f38cd2968b54bf" Jan 22 10:17:20 crc kubenswrapper[4892]: I0122 10:17:20.673161 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerStarted","Data":"cec64c8c58561c336f39be908fd134c1f2197687f00578cd16620431d59c86d4"} Jan 22 10:19:01 crc kubenswrapper[4892]: I0122 10:19:01.220745 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gxkbf"] Jan 22 10:19:01 crc kubenswrapper[4892]: E0122 10:19:01.221692 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcd0da9a-d4a8-42e0-876f-334f488a0885" containerName="collect-profiles" Jan 22 10:19:01 crc kubenswrapper[4892]: I0122 10:19:01.221705 4892 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bcd0da9a-d4a8-42e0-876f-334f488a0885" containerName="collect-profiles" Jan 22 10:19:01 crc kubenswrapper[4892]: I0122 10:19:01.221873 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcd0da9a-d4a8-42e0-876f-334f488a0885" containerName="collect-profiles" Jan 22 10:19:01 crc kubenswrapper[4892]: I0122 10:19:01.223112 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxkbf" Jan 22 10:19:01 crc kubenswrapper[4892]: I0122 10:19:01.293967 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxkbf"] Jan 22 10:19:01 crc kubenswrapper[4892]: I0122 10:19:01.381815 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09cc4779-e17f-49e1-8c6b-e27cc577befa-catalog-content\") pod \"redhat-marketplace-gxkbf\" (UID: \"09cc4779-e17f-49e1-8c6b-e27cc577befa\") " pod="openshift-marketplace/redhat-marketplace-gxkbf" Jan 22 10:19:01 crc kubenswrapper[4892]: I0122 10:19:01.382004 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm6ml\" (UniqueName: \"kubernetes.io/projected/09cc4779-e17f-49e1-8c6b-e27cc577befa-kube-api-access-dm6ml\") pod \"redhat-marketplace-gxkbf\" (UID: \"09cc4779-e17f-49e1-8c6b-e27cc577befa\") " pod="openshift-marketplace/redhat-marketplace-gxkbf" Jan 22 10:19:01 crc kubenswrapper[4892]: I0122 10:19:01.382075 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09cc4779-e17f-49e1-8c6b-e27cc577befa-utilities\") pod \"redhat-marketplace-gxkbf\" (UID: \"09cc4779-e17f-49e1-8c6b-e27cc577befa\") " pod="openshift-marketplace/redhat-marketplace-gxkbf" Jan 22 10:19:01 crc kubenswrapper[4892]: I0122 10:19:01.484042 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm6ml\" (UniqueName: \"kubernetes.io/projected/09cc4779-e17f-49e1-8c6b-e27cc577befa-kube-api-access-dm6ml\") pod \"redhat-marketplace-gxkbf\" (UID: \"09cc4779-e17f-49e1-8c6b-e27cc577befa\") " pod="openshift-marketplace/redhat-marketplace-gxkbf" Jan 22 10:19:01 crc kubenswrapper[4892]: I0122 10:19:01.484324 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09cc4779-e17f-49e1-8c6b-e27cc577befa-utilities\") pod \"redhat-marketplace-gxkbf\" (UID: \"09cc4779-e17f-49e1-8c6b-e27cc577befa\") " pod="openshift-marketplace/redhat-marketplace-gxkbf" Jan 22 10:19:01 crc kubenswrapper[4892]: I0122 10:19:01.484430 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09cc4779-e17f-49e1-8c6b-e27cc577befa-catalog-content\") pod \"redhat-marketplace-gxkbf\" (UID: \"09cc4779-e17f-49e1-8c6b-e27cc577befa\") " pod="openshift-marketplace/redhat-marketplace-gxkbf" Jan 22 10:19:01 crc kubenswrapper[4892]: I0122 10:19:01.485236 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09cc4779-e17f-49e1-8c6b-e27cc577befa-utilities\") pod \"redhat-marketplace-gxkbf\" (UID: \"09cc4779-e17f-49e1-8c6b-e27cc577befa\") " pod="openshift-marketplace/redhat-marketplace-gxkbf" Jan 22 10:19:01 crc kubenswrapper[4892]: I0122 10:19:01.485560 4892 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09cc4779-e17f-49e1-8c6b-e27cc577befa-catalog-content\") pod \"redhat-marketplace-gxkbf\" (UID: \"09cc4779-e17f-49e1-8c6b-e27cc577befa\") " pod="openshift-marketplace/redhat-marketplace-gxkbf" Jan 22 10:19:01 crc kubenswrapper[4892]: I0122 10:19:01.503858 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm6ml\" (UniqueName: \"kubernetes.io/projected/09cc4779-e17f-49e1-8c6b-e27cc577befa-kube-api-access-dm6ml\") pod \"redhat-marketplace-gxkbf\" (UID: \"09cc4779-e17f-49e1-8c6b-e27cc577befa\") " pod="openshift-marketplace/redhat-marketplace-gxkbf" Jan 22 10:19:01 crc kubenswrapper[4892]: I0122 10:19:01.547502 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxkbf" Jan 22 10:19:02 crc kubenswrapper[4892]: I0122 10:19:02.110239 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxkbf"] Jan 22 10:19:02 crc kubenswrapper[4892]: I0122 10:19:02.625967 4892 generic.go:334] "Generic (PLEG): container finished" podID="09cc4779-e17f-49e1-8c6b-e27cc577befa" containerID="03178ac2874f4f81adffcb762498ec47e23410877fb545527485b9b030a94b7b" exitCode=0 Jan 22 10:19:02 crc kubenswrapper[4892]: I0122 10:19:02.626163 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxkbf" event={"ID":"09cc4779-e17f-49e1-8c6b-e27cc577befa","Type":"ContainerDied","Data":"03178ac2874f4f81adffcb762498ec47e23410877fb545527485b9b030a94b7b"} Jan 22 10:19:02 crc kubenswrapper[4892]: I0122 10:19:02.626345 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxkbf" event={"ID":"09cc4779-e17f-49e1-8c6b-e27cc577befa","Type":"ContainerStarted","Data":"8d6a6957ed2062d547e13704681ae3d30855720036d4821a241835028d2d1374"} Jan 22 10:19:02 crc kubenswrapper[4892]: I0122 10:19:02.629660 4892 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 10:19:03 crc kubenswrapper[4892]: I0122 10:19:03.636346 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxkbf" event={"ID":"09cc4779-e17f-49e1-8c6b-e27cc577befa","Type":"ContainerStarted","Data":"e64064db564689621d327c113149c28145d1975892ec505a33fd88edd64ab90e"} Jan 22 10:19:04 crc kubenswrapper[4892]: I0122 10:19:04.647243 4892 generic.go:334] "Generic (PLEG): container finished" podID="09cc4779-e17f-49e1-8c6b-e27cc577befa" containerID="e64064db564689621d327c113149c28145d1975892ec505a33fd88edd64ab90e" exitCode=0 Jan 22 10:19:04 crc kubenswrapper[4892]: I0122 10:19:04.647355 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxkbf" event={"ID":"09cc4779-e17f-49e1-8c6b-e27cc577befa","Type":"ContainerDied","Data":"e64064db564689621d327c113149c28145d1975892ec505a33fd88edd64ab90e"} Jan 22 10:19:05 crc kubenswrapper[4892]: I0122 10:19:05.659982 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxkbf" event={"ID":"09cc4779-e17f-49e1-8c6b-e27cc577befa","Type":"ContainerStarted","Data":"d4a38708a0f0b83fbef6e5065e81a9f84a072b4ba7e0b17b20e27710b779f8fa"} Jan 22 10:19:05 crc kubenswrapper[4892]: I0122 10:19:05.682147 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gxkbf" 
podStartSLOduration=2.234660249 podStartE2EDuration="4.682126848s" podCreationTimestamp="2026-01-22 10:19:01 +0000 UTC" firstStartedPulling="2026-01-22 10:19:02.629441207 +0000 UTC m=+4112.473520270" lastFinishedPulling="2026-01-22 10:19:05.076907806 +0000 UTC m=+4114.920986869" observedRunningTime="2026-01-22 10:19:05.677367967 +0000 UTC m=+4115.521447030" watchObservedRunningTime="2026-01-22 10:19:05.682126848 +0000 UTC m=+4115.526205911" Jan 22 10:19:11 crc kubenswrapper[4892]: I0122 10:19:11.548320 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gxkbf" Jan 22 10:19:11 crc kubenswrapper[4892]: I0122 10:19:11.549838 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gxkbf" Jan 22 10:19:11 crc kubenswrapper[4892]: I0122 10:19:11.599372 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gxkbf" Jan 22 10:19:11 crc kubenswrapper[4892]: I0122 10:19:11.747925 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gxkbf" Jan 22 10:19:11 crc kubenswrapper[4892]: I0122 10:19:11.835726 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxkbf"] Jan 22 10:19:13 crc kubenswrapper[4892]: I0122 10:19:13.719983 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gxkbf" podUID="09cc4779-e17f-49e1-8c6b-e27cc577befa" containerName="registry-server" containerID="cri-o://d4a38708a0f0b83fbef6e5065e81a9f84a072b4ba7e0b17b20e27710b779f8fa" gracePeriod=2 Jan 22 10:19:14 crc kubenswrapper[4892]: I0122 10:19:14.729724 4892 generic.go:334] "Generic (PLEG): container finished" podID="09cc4779-e17f-49e1-8c6b-e27cc577befa" containerID="d4a38708a0f0b83fbef6e5065e81a9f84a072b4ba7e0b17b20e27710b779f8fa" exitCode=0 Jan 22 10:19:14 crc kubenswrapper[4892]: I0122 10:19:14.730065 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxkbf" event={"ID":"09cc4779-e17f-49e1-8c6b-e27cc577befa","Type":"ContainerDied","Data":"d4a38708a0f0b83fbef6e5065e81a9f84a072b4ba7e0b17b20e27710b779f8fa"} Jan 22 10:19:14 crc kubenswrapper[4892]: I0122 10:19:14.967471 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxkbf" Jan 22 10:19:15 crc kubenswrapper[4892]: I0122 10:19:15.081400 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm6ml\" (UniqueName: \"kubernetes.io/projected/09cc4779-e17f-49e1-8c6b-e27cc577befa-kube-api-access-dm6ml\") pod \"09cc4779-e17f-49e1-8c6b-e27cc577befa\" (UID: \"09cc4779-e17f-49e1-8c6b-e27cc577befa\") " Jan 22 10:19:15 crc kubenswrapper[4892]: I0122 10:19:15.081596 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09cc4779-e17f-49e1-8c6b-e27cc577befa-catalog-content\") pod \"09cc4779-e17f-49e1-8c6b-e27cc577befa\" (UID: \"09cc4779-e17f-49e1-8c6b-e27cc577befa\") " Jan 22 10:19:15 crc kubenswrapper[4892]: I0122 10:19:15.081650 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09cc4779-e17f-49e1-8c6b-e27cc577befa-utilities\") pod \"09cc4779-e17f-49e1-8c6b-e27cc577befa\" (UID: \"09cc4779-e17f-49e1-8c6b-e27cc577befa\") " Jan 22 10:19:15 crc kubenswrapper[4892]: I0122 10:19:15.082464 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09cc4779-e17f-49e1-8c6b-e27cc577befa-utilities" (OuterVolumeSpecName: "utilities") pod "09cc4779-e17f-49e1-8c6b-e27cc577befa" (UID: "09cc4779-e17f-49e1-8c6b-e27cc577befa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:19:15 crc kubenswrapper[4892]: I0122 10:19:15.088077 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cc4779-e17f-49e1-8c6b-e27cc577befa-kube-api-access-dm6ml" (OuterVolumeSpecName: "kube-api-access-dm6ml") pod "09cc4779-e17f-49e1-8c6b-e27cc577befa" (UID: "09cc4779-e17f-49e1-8c6b-e27cc577befa"). InnerVolumeSpecName "kube-api-access-dm6ml". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:19:15 crc kubenswrapper[4892]: I0122 10:19:15.103946 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09cc4779-e17f-49e1-8c6b-e27cc577befa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "09cc4779-e17f-49e1-8c6b-e27cc577befa" (UID: "09cc4779-e17f-49e1-8c6b-e27cc577befa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:19:15 crc kubenswrapper[4892]: I0122 10:19:15.183988 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09cc4779-e17f-49e1-8c6b-e27cc577befa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:19:15 crc kubenswrapper[4892]: I0122 10:19:15.184026 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09cc4779-e17f-49e1-8c6b-e27cc577befa-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:19:15 crc kubenswrapper[4892]: I0122 10:19:15.184040 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dm6ml\" (UniqueName: \"kubernetes.io/projected/09cc4779-e17f-49e1-8c6b-e27cc577befa-kube-api-access-dm6ml\") on node \"crc\" DevicePath \"\"" Jan 22 10:19:15 crc kubenswrapper[4892]: I0122 10:19:15.740425 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxkbf" event={"ID":"09cc4779-e17f-49e1-8c6b-e27cc577befa","Type":"ContainerDied","Data":"8d6a6957ed2062d547e13704681ae3d30855720036d4821a241835028d2d1374"} Jan 22 10:19:15 crc kubenswrapper[4892]: I0122 10:19:15.740472 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxkbf" Jan 22 10:19:15 crc kubenswrapper[4892]: I0122 10:19:15.740484 4892 scope.go:117] "RemoveContainer" containerID="d4a38708a0f0b83fbef6e5065e81a9f84a072b4ba7e0b17b20e27710b779f8fa" Jan 22 10:19:15 crc kubenswrapper[4892]: I0122 10:19:15.766848 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxkbf"] Jan 22 10:19:15 crc kubenswrapper[4892]: I0122 10:19:15.768427 4892 scope.go:117] "RemoveContainer" containerID="e64064db564689621d327c113149c28145d1975892ec505a33fd88edd64ab90e" Jan 22 10:19:15 crc kubenswrapper[4892]: I0122 10:19:15.775663 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxkbf"] Jan 22 10:19:15 crc kubenswrapper[4892]: I0122 10:19:15.787123 4892 scope.go:117] "RemoveContainer" containerID="03178ac2874f4f81adffcb762498ec47e23410877fb545527485b9b030a94b7b" Jan 22 10:19:17 crc kubenswrapper[4892]: I0122 10:19:17.429473 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cc4779-e17f-49e1-8c6b-e27cc577befa" path="/var/lib/kubelet/pods/09cc4779-e17f-49e1-8c6b-e27cc577befa/volumes" Jan 22 10:19:32 crc kubenswrapper[4892]: I0122 10:19:32.177412 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6dfcx"] Jan 22 10:19:32 crc kubenswrapper[4892]: E0122 10:19:32.179170 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09cc4779-e17f-49e1-8c6b-e27cc577befa" containerName="extract-utilities" Jan 22 10:19:32 crc kubenswrapper[4892]: I0122 10:19:32.179198 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="09cc4779-e17f-49e1-8c6b-e27cc577befa" containerName="extract-utilities" Jan 22 10:19:32 crc kubenswrapper[4892]: E0122 10:19:32.179228 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09cc4779-e17f-49e1-8c6b-e27cc577befa" containerName="registry-server" Jan 22 10:19:32 crc kubenswrapper[4892]: I0122 10:19:32.179235 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="09cc4779-e17f-49e1-8c6b-e27cc577befa" containerName="registry-server" Jan 22 10:19:32 crc kubenswrapper[4892]: E0122 10:19:32.179254 4892 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09cc4779-e17f-49e1-8c6b-e27cc577befa" containerName="extract-content" Jan 22 10:19:32 crc kubenswrapper[4892]: I0122 10:19:32.179261 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="09cc4779-e17f-49e1-8c6b-e27cc577befa" containerName="extract-content" Jan 22 10:19:32 crc kubenswrapper[4892]: I0122 10:19:32.179507 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="09cc4779-e17f-49e1-8c6b-e27cc577befa" containerName="registry-server" Jan 22 10:19:32 crc kubenswrapper[4892]: I0122 10:19:32.181021 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6dfcx" Jan 22 10:19:32 crc kubenswrapper[4892]: I0122 10:19:32.192232 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6dfcx"] Jan 22 10:19:32 crc kubenswrapper[4892]: I0122 10:19:32.383781 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f76b5de4-abec-47e7-b459-849a82311c4e-catalog-content\") pod \"certified-operators-6dfcx\" (UID: \"f76b5de4-abec-47e7-b459-849a82311c4e\") " pod="openshift-marketplace/certified-operators-6dfcx" Jan 22 10:19:32 crc kubenswrapper[4892]: I0122 10:19:32.383893 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pbxt\" (UniqueName: \"kubernetes.io/projected/f76b5de4-abec-47e7-b459-849a82311c4e-kube-api-access-2pbxt\") pod \"certified-operators-6dfcx\" (UID: \"f76b5de4-abec-47e7-b459-849a82311c4e\") " pod="openshift-marketplace/certified-operators-6dfcx" Jan 22 10:19:32 crc kubenswrapper[4892]: I0122 10:19:32.383956 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f76b5de4-abec-47e7-b459-849a82311c4e-utilities\") pod \"certified-operators-6dfcx\" (UID: \"f76b5de4-abec-47e7-b459-849a82311c4e\") " pod="openshift-marketplace/certified-operators-6dfcx" Jan 22 10:19:32 crc kubenswrapper[4892]: I0122 10:19:32.486302 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pbxt\" (UniqueName: \"kubernetes.io/projected/f76b5de4-abec-47e7-b459-849a82311c4e-kube-api-access-2pbxt\") pod \"certified-operators-6dfcx\" (UID: \"f76b5de4-abec-47e7-b459-849a82311c4e\") " pod="openshift-marketplace/certified-operators-6dfcx" Jan 22 10:19:32 crc kubenswrapper[4892]: I0122 10:19:32.486831 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f76b5de4-abec-47e7-b459-849a82311c4e-utilities\") pod \"certified-operators-6dfcx\" (UID: \"f76b5de4-abec-47e7-b459-849a82311c4e\") " pod="openshift-marketplace/certified-operators-6dfcx" Jan 22 10:19:32 crc kubenswrapper[4892]: I0122 10:19:32.487341 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f76b5de4-abec-47e7-b459-849a82311c4e-utilities\") pod \"certified-operators-6dfcx\" (UID: \"f76b5de4-abec-47e7-b459-849a82311c4e\") " pod="openshift-marketplace/certified-operators-6dfcx" Jan 22 10:19:32 crc kubenswrapper[4892]: I0122 10:19:32.487878 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f76b5de4-abec-47e7-b459-849a82311c4e-catalog-content\") pod \"certified-operators-6dfcx\" (UID: \"f76b5de4-abec-47e7-b459-849a82311c4e\") " pod="openshift-marketplace/certified-operators-6dfcx" Jan 22 10:19:32 crc kubenswrapper[4892]: I0122 10:19:32.488226 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f76b5de4-abec-47e7-b459-849a82311c4e-catalog-content\") pod \"certified-operators-6dfcx\" (UID: \"f76b5de4-abec-47e7-b459-849a82311c4e\") " pod="openshift-marketplace/certified-operators-6dfcx" Jan 22 10:19:32 crc kubenswrapper[4892]: I0122 10:19:32.507191 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pbxt\" (UniqueName: \"kubernetes.io/projected/f76b5de4-abec-47e7-b459-849a82311c4e-kube-api-access-2pbxt\") pod \"certified-operators-6dfcx\" (UID: \"f76b5de4-abec-47e7-b459-849a82311c4e\") " pod="openshift-marketplace/certified-operators-6dfcx" Jan 22 10:19:32 crc kubenswrapper[4892]: I0122 10:19:32.508951 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6dfcx" Jan 22 10:19:32 crc kubenswrapper[4892]: I0122 10:19:32.995045 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6dfcx"] Jan 22 10:19:33 crc kubenswrapper[4892]: I0122 10:19:33.895888 4892 generic.go:334] "Generic (PLEG): container finished" podID="f76b5de4-abec-47e7-b459-849a82311c4e" containerID="6acaaa9010135607a9b1d37e5ce80b7a12fa86a055c4b145a8d232427e59e377" exitCode=0 Jan 22 10:19:33 crc kubenswrapper[4892]: I0122 10:19:33.895933 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6dfcx" event={"ID":"f76b5de4-abec-47e7-b459-849a82311c4e","Type":"ContainerDied","Data":"6acaaa9010135607a9b1d37e5ce80b7a12fa86a055c4b145a8d232427e59e377"} Jan 22 10:19:33 crc kubenswrapper[4892]: I0122 10:19:33.896193 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6dfcx" event={"ID":"f76b5de4-abec-47e7-b459-849a82311c4e","Type":"ContainerStarted","Data":"12dc78cf09012bb3b4df6fe812c86a715a797663e243e5f707608801e6645c92"} Jan 22 10:19:35 crc kubenswrapper[4892]: I0122 10:19:35.916085 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6dfcx" event={"ID":"f76b5de4-abec-47e7-b459-849a82311c4e","Type":"ContainerStarted","Data":"6dd9f0fc984c805514dddf576e3494c786c7faf82be153246f493344f6bbc241"} Jan 22 10:19:36 crc kubenswrapper[4892]: I0122 10:19:36.926564 4892 generic.go:334] "Generic (PLEG): container finished" podID="f76b5de4-abec-47e7-b459-849a82311c4e" containerID="6dd9f0fc984c805514dddf576e3494c786c7faf82be153246f493344f6bbc241" exitCode=0 Jan 22 10:19:36 crc kubenswrapper[4892]: I0122 10:19:36.927441 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6dfcx" event={"ID":"f76b5de4-abec-47e7-b459-849a82311c4e","Type":"ContainerDied","Data":"6dd9f0fc984c805514dddf576e3494c786c7faf82be153246f493344f6bbc241"} Jan 22 10:19:37 crc kubenswrapper[4892]: I0122 10:19:37.937868 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6dfcx" event={"ID":"f76b5de4-abec-47e7-b459-849a82311c4e","Type":"ContainerStarted","Data":"5debeb2bcf281b0f45466c92627871e9701d005636eff2c1ddd234b9cd21a612"} Jan 22 10:19:37 crc kubenswrapper[4892]: I0122 
10:19:37.969144 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6dfcx" podStartSLOduration=2.473008462 podStartE2EDuration="5.969125302s" podCreationTimestamp="2026-01-22 10:19:32 +0000 UTC" firstStartedPulling="2026-01-22 10:19:33.89780728 +0000 UTC m=+4143.741886343" lastFinishedPulling="2026-01-22 10:19:37.39392412 +0000 UTC m=+4147.238003183" observedRunningTime="2026-01-22 10:19:37.95995497 +0000 UTC m=+4147.804034033" watchObservedRunningTime="2026-01-22 10:19:37.969125302 +0000 UTC m=+4147.813204365" Jan 22 10:19:42 crc kubenswrapper[4892]: I0122 10:19:42.510231 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6dfcx" Jan 22 10:19:42 crc kubenswrapper[4892]: I0122 10:19:42.510821 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6dfcx" Jan 22 10:19:42 crc kubenswrapper[4892]: I0122 10:19:42.554695 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6dfcx" Jan 22 10:19:43 crc kubenswrapper[4892]: I0122 10:19:43.031254 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6dfcx" Jan 22 10:19:43 crc kubenswrapper[4892]: I0122 10:19:43.080377 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6dfcx"] Jan 22 10:19:44 crc kubenswrapper[4892]: I0122 10:19:44.990534 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6dfcx" podUID="f76b5de4-abec-47e7-b459-849a82311c4e" containerName="registry-server" containerID="cri-o://5debeb2bcf281b0f45466c92627871e9701d005636eff2c1ddd234b9cd21a612" gracePeriod=2 Jan 22 10:19:45 crc kubenswrapper[4892]: I0122 10:19:45.962997 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6dfcx" Jan 22 10:19:46 crc kubenswrapper[4892]: I0122 10:19:46.004815 4892 generic.go:334] "Generic (PLEG): container finished" podID="f76b5de4-abec-47e7-b459-849a82311c4e" containerID="5debeb2bcf281b0f45466c92627871e9701d005636eff2c1ddd234b9cd21a612" exitCode=0 Jan 22 10:19:46 crc kubenswrapper[4892]: I0122 10:19:46.004860 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6dfcx" event={"ID":"f76b5de4-abec-47e7-b459-849a82311c4e","Type":"ContainerDied","Data":"5debeb2bcf281b0f45466c92627871e9701d005636eff2c1ddd234b9cd21a612"} Jan 22 10:19:46 crc kubenswrapper[4892]: I0122 10:19:46.004880 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6dfcx" Jan 22 10:19:46 crc kubenswrapper[4892]: I0122 10:19:46.004899 4892 scope.go:117] "RemoveContainer" containerID="5debeb2bcf281b0f45466c92627871e9701d005636eff2c1ddd234b9cd21a612" Jan 22 10:19:46 crc kubenswrapper[4892]: I0122 10:19:46.004888 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6dfcx" event={"ID":"f76b5de4-abec-47e7-b459-849a82311c4e","Type":"ContainerDied","Data":"12dc78cf09012bb3b4df6fe812c86a715a797663e243e5f707608801e6645c92"} Jan 22 10:19:46 crc kubenswrapper[4892]: I0122 10:19:46.024458 4892 scope.go:117] "RemoveContainer" containerID="6dd9f0fc984c805514dddf576e3494c786c7faf82be153246f493344f6bbc241" Jan 22 10:19:46 crc kubenswrapper[4892]: I0122 10:19:46.045360 4892 scope.go:117] "RemoveContainer" containerID="6acaaa9010135607a9b1d37e5ce80b7a12fa86a055c4b145a8d232427e59e377" Jan 22 10:19:46 crc kubenswrapper[4892]: I0122 10:19:46.062334 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f76b5de4-abec-47e7-b459-849a82311c4e-utilities" (OuterVolumeSpecName: "utilities") pod "f76b5de4-abec-47e7-b459-849a82311c4e" (UID: "f76b5de4-abec-47e7-b459-849a82311c4e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:19:46 crc kubenswrapper[4892]: I0122 10:19:46.062409 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f76b5de4-abec-47e7-b459-849a82311c4e-utilities\") pod \"f76b5de4-abec-47e7-b459-849a82311c4e\" (UID: \"f76b5de4-abec-47e7-b459-849a82311c4e\") " Jan 22 10:19:46 crc kubenswrapper[4892]: I0122 10:19:46.062462 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pbxt\" (UniqueName: \"kubernetes.io/projected/f76b5de4-abec-47e7-b459-849a82311c4e-kube-api-access-2pbxt\") pod \"f76b5de4-abec-47e7-b459-849a82311c4e\" (UID: \"f76b5de4-abec-47e7-b459-849a82311c4e\") " Jan 22 10:19:46 crc kubenswrapper[4892]: I0122 10:19:46.062511 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f76b5de4-abec-47e7-b459-849a82311c4e-catalog-content\") pod \"f76b5de4-abec-47e7-b459-849a82311c4e\" (UID: \"f76b5de4-abec-47e7-b459-849a82311c4e\") " Jan 22 10:19:46 crc kubenswrapper[4892]: I0122 10:19:46.062922 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f76b5de4-abec-47e7-b459-849a82311c4e-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:19:46 crc kubenswrapper[4892]: I0122 10:19:46.071851 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f76b5de4-abec-47e7-b459-849a82311c4e-kube-api-access-2pbxt" (OuterVolumeSpecName: "kube-api-access-2pbxt") pod "f76b5de4-abec-47e7-b459-849a82311c4e" (UID: "f76b5de4-abec-47e7-b459-849a82311c4e"). InnerVolumeSpecName "kube-api-access-2pbxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:19:46 crc kubenswrapper[4892]: I0122 10:19:46.107206 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f76b5de4-abec-47e7-b459-849a82311c4e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f76b5de4-abec-47e7-b459-849a82311c4e" (UID: "f76b5de4-abec-47e7-b459-849a82311c4e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:19:46 crc kubenswrapper[4892]: I0122 10:19:46.128986 4892 scope.go:117] "RemoveContainer" containerID="5debeb2bcf281b0f45466c92627871e9701d005636eff2c1ddd234b9cd21a612" Jan 22 10:19:46 crc kubenswrapper[4892]: E0122 10:19:46.129437 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5debeb2bcf281b0f45466c92627871e9701d005636eff2c1ddd234b9cd21a612\": container with ID starting with 5debeb2bcf281b0f45466c92627871e9701d005636eff2c1ddd234b9cd21a612 not found: ID does not exist" containerID="5debeb2bcf281b0f45466c92627871e9701d005636eff2c1ddd234b9cd21a612" Jan 22 10:19:46 crc kubenswrapper[4892]: I0122 10:19:46.129488 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5debeb2bcf281b0f45466c92627871e9701d005636eff2c1ddd234b9cd21a612"} err="failed to get container status \"5debeb2bcf281b0f45466c92627871e9701d005636eff2c1ddd234b9cd21a612\": rpc error: code = NotFound desc = could not find container \"5debeb2bcf281b0f45466c92627871e9701d005636eff2c1ddd234b9cd21a612\": container with ID starting with 5debeb2bcf281b0f45466c92627871e9701d005636eff2c1ddd234b9cd21a612 not found: ID does not exist" Jan 22 10:19:46 crc kubenswrapper[4892]: I0122 10:19:46.129513 4892 scope.go:117] "RemoveContainer" containerID="6dd9f0fc984c805514dddf576e3494c786c7faf82be153246f493344f6bbc241" Jan 22 10:19:46 crc kubenswrapper[4892]: E0122 10:19:46.129822 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dd9f0fc984c805514dddf576e3494c786c7faf82be153246f493344f6bbc241\": container with ID starting with 6dd9f0fc984c805514dddf576e3494c786c7faf82be153246f493344f6bbc241 not found: ID does not exist" containerID="6dd9f0fc984c805514dddf576e3494c786c7faf82be153246f493344f6bbc241" Jan 22 10:19:46 crc kubenswrapper[4892]: I0122 10:19:46.129846 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dd9f0fc984c805514dddf576e3494c786c7faf82be153246f493344f6bbc241"} err="failed to get container status \"6dd9f0fc984c805514dddf576e3494c786c7faf82be153246f493344f6bbc241\": rpc error: code = NotFound desc = could not find container \"6dd9f0fc984c805514dddf576e3494c786c7faf82be153246f493344f6bbc241\": container with ID starting with 6dd9f0fc984c805514dddf576e3494c786c7faf82be153246f493344f6bbc241 not found: ID does not exist" Jan 22 10:19:46 crc kubenswrapper[4892]: I0122 10:19:46.129865 4892 scope.go:117] "RemoveContainer" containerID="6acaaa9010135607a9b1d37e5ce80b7a12fa86a055c4b145a8d232427e59e377" Jan 22 10:19:46 crc kubenswrapper[4892]: E0122 10:19:46.130143 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6acaaa9010135607a9b1d37e5ce80b7a12fa86a055c4b145a8d232427e59e377\": container with ID starting with 6acaaa9010135607a9b1d37e5ce80b7a12fa86a055c4b145a8d232427e59e377 not found: ID does not exist" containerID="6acaaa9010135607a9b1d37e5ce80b7a12fa86a055c4b145a8d232427e59e377" Jan 22 10:19:46 crc kubenswrapper[4892]: I0122 10:19:46.130168 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6acaaa9010135607a9b1d37e5ce80b7a12fa86a055c4b145a8d232427e59e377"} err="failed to get container status \"6acaaa9010135607a9b1d37e5ce80b7a12fa86a055c4b145a8d232427e59e377\": rpc error: code = NotFound desc = could not 
find container \"6acaaa9010135607a9b1d37e5ce80b7a12fa86a055c4b145a8d232427e59e377\": container with ID starting with 6acaaa9010135607a9b1d37e5ce80b7a12fa86a055c4b145a8d232427e59e377 not found: ID does not exist" Jan 22 10:19:46 crc kubenswrapper[4892]: I0122 10:19:46.166364 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pbxt\" (UniqueName: \"kubernetes.io/projected/f76b5de4-abec-47e7-b459-849a82311c4e-kube-api-access-2pbxt\") on node \"crc\" DevicePath \"\"" Jan 22 10:19:46 crc kubenswrapper[4892]: I0122 10:19:46.166399 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f76b5de4-abec-47e7-b459-849a82311c4e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:19:46 crc kubenswrapper[4892]: I0122 10:19:46.323244 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:19:46 crc kubenswrapper[4892]: I0122 10:19:46.323329 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:19:46 crc kubenswrapper[4892]: I0122 10:19:46.345060 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6dfcx"] Jan 22 10:19:46 crc kubenswrapper[4892]: I0122 10:19:46.358439 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6dfcx"] Jan 22 10:19:47 crc kubenswrapper[4892]: I0122 10:19:47.442311 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f76b5de4-abec-47e7-b459-849a82311c4e" path="/var/lib/kubelet/pods/f76b5de4-abec-47e7-b459-849a82311c4e/volumes" Jan 22 10:20:16 crc kubenswrapper[4892]: I0122 10:20:16.323682 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:20:16 crc kubenswrapper[4892]: I0122 10:20:16.324315 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:20:31 crc kubenswrapper[4892]: I0122 10:20:31.308849 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4575g"] Jan 22 10:20:31 crc kubenswrapper[4892]: E0122 10:20:31.310372 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f76b5de4-abec-47e7-b459-849a82311c4e" containerName="registry-server" Jan 22 10:20:31 crc kubenswrapper[4892]: I0122 10:20:31.310390 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="f76b5de4-abec-47e7-b459-849a82311c4e" containerName="registry-server" Jan 22 10:20:31 crc kubenswrapper[4892]: E0122 10:20:31.310431 4892 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f76b5de4-abec-47e7-b459-849a82311c4e" containerName="extract-utilities" Jan 22 10:20:31 crc kubenswrapper[4892]: I0122 10:20:31.310439 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="f76b5de4-abec-47e7-b459-849a82311c4e" containerName="extract-utilities" Jan 22 10:20:31 crc kubenswrapper[4892]: E0122 10:20:31.310470 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f76b5de4-abec-47e7-b459-849a82311c4e" containerName="extract-content" Jan 22 10:20:31 crc kubenswrapper[4892]: I0122 10:20:31.310478 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="f76b5de4-abec-47e7-b459-849a82311c4e" containerName="extract-content" Jan 22 10:20:31 crc kubenswrapper[4892]: I0122 10:20:31.310703 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="f76b5de4-abec-47e7-b459-849a82311c4e" containerName="registry-server" Jan 22 10:20:31 crc kubenswrapper[4892]: I0122 10:20:31.312313 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4575g" Jan 22 10:20:31 crc kubenswrapper[4892]: I0122 10:20:31.321079 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4575g"] Jan 22 10:20:31 crc kubenswrapper[4892]: I0122 10:20:31.468564 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dfgd\" (UniqueName: \"kubernetes.io/projected/7b1ab9b4-c19a-4a29-970f-0eb4403e2f37-kube-api-access-7dfgd\") pod \"redhat-operators-4575g\" (UID: \"7b1ab9b4-c19a-4a29-970f-0eb4403e2f37\") " pod="openshift-marketplace/redhat-operators-4575g" Jan 22 10:20:31 crc kubenswrapper[4892]: I0122 10:20:31.469024 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b1ab9b4-c19a-4a29-970f-0eb4403e2f37-utilities\") pod \"redhat-operators-4575g\" (UID: \"7b1ab9b4-c19a-4a29-970f-0eb4403e2f37\") " pod="openshift-marketplace/redhat-operators-4575g" Jan 22 10:20:31 crc kubenswrapper[4892]: I0122 10:20:31.469126 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b1ab9b4-c19a-4a29-970f-0eb4403e2f37-catalog-content\") pod \"redhat-operators-4575g\" (UID: \"7b1ab9b4-c19a-4a29-970f-0eb4403e2f37\") " pod="openshift-marketplace/redhat-operators-4575g" Jan 22 10:20:31 crc kubenswrapper[4892]: I0122 10:20:31.571391 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b1ab9b4-c19a-4a29-970f-0eb4403e2f37-utilities\") pod \"redhat-operators-4575g\" (UID: \"7b1ab9b4-c19a-4a29-970f-0eb4403e2f37\") " pod="openshift-marketplace/redhat-operators-4575g" Jan 22 10:20:31 crc kubenswrapper[4892]: I0122 10:20:31.571984 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b1ab9b4-c19a-4a29-970f-0eb4403e2f37-utilities\") pod \"redhat-operators-4575g\" (UID: \"7b1ab9b4-c19a-4a29-970f-0eb4403e2f37\") " pod="openshift-marketplace/redhat-operators-4575g" Jan 22 10:20:31 crc kubenswrapper[4892]: I0122 10:20:31.572344 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b1ab9b4-c19a-4a29-970f-0eb4403e2f37-catalog-content\") pod \"redhat-operators-4575g\" (UID: \"7b1ab9b4-c19a-4a29-970f-0eb4403e2f37\") " 
pod="openshift-marketplace/redhat-operators-4575g" Jan 22 10:20:31 crc kubenswrapper[4892]: I0122 10:20:31.572660 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b1ab9b4-c19a-4a29-970f-0eb4403e2f37-catalog-content\") pod \"redhat-operators-4575g\" (UID: \"7b1ab9b4-c19a-4a29-970f-0eb4403e2f37\") " pod="openshift-marketplace/redhat-operators-4575g" Jan 22 10:20:31 crc kubenswrapper[4892]: I0122 10:20:31.572920 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dfgd\" (UniqueName: \"kubernetes.io/projected/7b1ab9b4-c19a-4a29-970f-0eb4403e2f37-kube-api-access-7dfgd\") pod \"redhat-operators-4575g\" (UID: \"7b1ab9b4-c19a-4a29-970f-0eb4403e2f37\") " pod="openshift-marketplace/redhat-operators-4575g" Jan 22 10:20:31 crc kubenswrapper[4892]: I0122 10:20:31.593528 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dfgd\" (UniqueName: \"kubernetes.io/projected/7b1ab9b4-c19a-4a29-970f-0eb4403e2f37-kube-api-access-7dfgd\") pod \"redhat-operators-4575g\" (UID: \"7b1ab9b4-c19a-4a29-970f-0eb4403e2f37\") " pod="openshift-marketplace/redhat-operators-4575g" Jan 22 10:20:31 crc kubenswrapper[4892]: I0122 10:20:31.636172 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4575g" Jan 22 10:20:32 crc kubenswrapper[4892]: I0122 10:20:32.666572 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4575g"] Jan 22 10:20:33 crc kubenswrapper[4892]: I0122 10:20:33.398307 4892 generic.go:334] "Generic (PLEG): container finished" podID="7b1ab9b4-c19a-4a29-970f-0eb4403e2f37" containerID="690229e54fffd2af53f4d9e806190ed2da5818a61461c84c8cdbb4e131e54fab" exitCode=0 Jan 22 10:20:33 crc kubenswrapper[4892]: I0122 10:20:33.398358 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4575g" event={"ID":"7b1ab9b4-c19a-4a29-970f-0eb4403e2f37","Type":"ContainerDied","Data":"690229e54fffd2af53f4d9e806190ed2da5818a61461c84c8cdbb4e131e54fab"} Jan 22 10:20:33 crc kubenswrapper[4892]: I0122 10:20:33.398829 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4575g" event={"ID":"7b1ab9b4-c19a-4a29-970f-0eb4403e2f37","Type":"ContainerStarted","Data":"9e389237050600c934a4a6030962a4cab63004ad9a8c3321d6635f2d8e082663"} Jan 22 10:20:34 crc kubenswrapper[4892]: I0122 10:20:34.408910 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4575g" event={"ID":"7b1ab9b4-c19a-4a29-970f-0eb4403e2f37","Type":"ContainerStarted","Data":"2b6bed2a4277ef7e2f144b02af5584a9389faad335b26469eb2ca250eed91462"} Jan 22 10:20:35 crc kubenswrapper[4892]: I0122 10:20:35.423420 4892 generic.go:334] "Generic (PLEG): container finished" podID="7b1ab9b4-c19a-4a29-970f-0eb4403e2f37" containerID="2b6bed2a4277ef7e2f144b02af5584a9389faad335b26469eb2ca250eed91462" exitCode=0 Jan 22 10:20:35 crc kubenswrapper[4892]: I0122 10:20:35.435310 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4575g" event={"ID":"7b1ab9b4-c19a-4a29-970f-0eb4403e2f37","Type":"ContainerDied","Data":"2b6bed2a4277ef7e2f144b02af5584a9389faad335b26469eb2ca250eed91462"} Jan 22 10:20:36 crc kubenswrapper[4892]: I0122 10:20:36.958124 4892 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xt4nm container/marketplace-operator 
namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 10:20:36 crc kubenswrapper[4892]: I0122 10:20:36.958178 4892 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xt4nm" podUID="02e012df-582c-41ec-9c63-ff6dd7cc08c6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 10:20:39 crc kubenswrapper[4892]: I0122 10:20:39.162206 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4575g" event={"ID":"7b1ab9b4-c19a-4a29-970f-0eb4403e2f37","Type":"ContainerStarted","Data":"dfcf2b87e3a92940d9dd40575d13670003db105ca918be417ffe1f8b5b19ee37"} Jan 22 10:20:39 crc kubenswrapper[4892]: I0122 10:20:39.184832 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4575g" podStartSLOduration=3.196540749 podStartE2EDuration="8.184811738s" podCreationTimestamp="2026-01-22 10:20:31 +0000 UTC" firstStartedPulling="2026-01-22 10:20:33.401080095 +0000 UTC m=+4203.245159158" lastFinishedPulling="2026-01-22 10:20:38.389351084 +0000 UTC m=+4208.233430147" observedRunningTime="2026-01-22 10:20:39.180803237 +0000 UTC m=+4209.024882300" watchObservedRunningTime="2026-01-22 10:20:39.184811738 +0000 UTC m=+4209.028890801" Jan 22 10:20:41 crc kubenswrapper[4892]: I0122 10:20:41.636742 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4575g" Jan 22 10:20:41 crc kubenswrapper[4892]: I0122 10:20:41.637089 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4575g" Jan 22 10:20:42 crc kubenswrapper[4892]: I0122 10:20:42.685436 4892 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4575g" podUID="7b1ab9b4-c19a-4a29-970f-0eb4403e2f37" containerName="registry-server" probeResult="failure" output=< Jan 22 10:20:42 crc kubenswrapper[4892]: timeout: failed to connect service ":50051" within 1s Jan 22 10:20:42 crc kubenswrapper[4892]: > Jan 22 10:20:46 crc kubenswrapper[4892]: I0122 10:20:46.323977 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:20:46 crc kubenswrapper[4892]: I0122 10:20:46.325157 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:20:46 crc kubenswrapper[4892]: I0122 10:20:46.325244 4892 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" Jan 22 10:20:46 crc kubenswrapper[4892]: I0122 10:20:46.326010 4892 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"cec64c8c58561c336f39be908fd134c1f2197687f00578cd16620431d59c86d4"} pod="openshift-machine-config-operator/machine-config-daemon-w87tf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 10:20:46 crc kubenswrapper[4892]: I0122 10:20:46.326060 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" containerID="cri-o://cec64c8c58561c336f39be908fd134c1f2197687f00578cd16620431d59c86d4" gracePeriod=600 Jan 22 10:20:48 crc kubenswrapper[4892]: I0122 10:20:48.256769 4892 generic.go:334] "Generic (PLEG): container finished" podID="4765e554-3060-4876-90fe-5e054619d7a1" containerID="cec64c8c58561c336f39be908fd134c1f2197687f00578cd16620431d59c86d4" exitCode=0 Jan 22 10:20:48 crc kubenswrapper[4892]: I0122 10:20:48.257059 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerDied","Data":"cec64c8c58561c336f39be908fd134c1f2197687f00578cd16620431d59c86d4"} Jan 22 10:20:48 crc kubenswrapper[4892]: I0122 10:20:48.257090 4892 scope.go:117] "RemoveContainer" containerID="10bae86c283fa7a460d03cd4fb5e38372c678bc14876631954f38cd2968b54bf" Jan 22 10:20:49 crc kubenswrapper[4892]: I0122 10:20:49.267050 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerStarted","Data":"a2823515206b3fbd88435a728042268cd85e4c8af005a094c2894a1d295d0fe6"} Jan 22 10:20:51 crc kubenswrapper[4892]: I0122 10:20:51.679117 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4575g" Jan 22 10:20:51 crc kubenswrapper[4892]: I0122 10:20:51.729382 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4575g" Jan 22 10:20:51 crc kubenswrapper[4892]: I0122 10:20:51.912797 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4575g"] Jan 22 10:20:53 crc kubenswrapper[4892]: I0122 10:20:53.298176 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4575g" podUID="7b1ab9b4-c19a-4a29-970f-0eb4403e2f37" containerName="registry-server" containerID="cri-o://dfcf2b87e3a92940d9dd40575d13670003db105ca918be417ffe1f8b5b19ee37" gracePeriod=2 Jan 22 10:20:53 crc kubenswrapper[4892]: I0122 10:20:53.729885 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4575g" Jan 22 10:20:53 crc kubenswrapper[4892]: I0122 10:20:53.831415 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b1ab9b4-c19a-4a29-970f-0eb4403e2f37-catalog-content\") pod \"7b1ab9b4-c19a-4a29-970f-0eb4403e2f37\" (UID: \"7b1ab9b4-c19a-4a29-970f-0eb4403e2f37\") " Jan 22 10:20:53 crc kubenswrapper[4892]: I0122 10:20:53.831503 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b1ab9b4-c19a-4a29-970f-0eb4403e2f37-utilities\") pod \"7b1ab9b4-c19a-4a29-970f-0eb4403e2f37\" (UID: \"7b1ab9b4-c19a-4a29-970f-0eb4403e2f37\") " Jan 22 10:20:53 crc kubenswrapper[4892]: I0122 10:20:53.831750 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dfgd\" (UniqueName: \"kubernetes.io/projected/7b1ab9b4-c19a-4a29-970f-0eb4403e2f37-kube-api-access-7dfgd\") pod \"7b1ab9b4-c19a-4a29-970f-0eb4403e2f37\" (UID: \"7b1ab9b4-c19a-4a29-970f-0eb4403e2f37\") " Jan 22 10:20:53 crc kubenswrapper[4892]: I0122 10:20:53.832510 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b1ab9b4-c19a-4a29-970f-0eb4403e2f37-utilities" (OuterVolumeSpecName: "utilities") pod "7b1ab9b4-c19a-4a29-970f-0eb4403e2f37" (UID: "7b1ab9b4-c19a-4a29-970f-0eb4403e2f37"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:20:53 crc kubenswrapper[4892]: I0122 10:20:53.833542 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b1ab9b4-c19a-4a29-970f-0eb4403e2f37-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:20:53 crc kubenswrapper[4892]: I0122 10:20:53.838055 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b1ab9b4-c19a-4a29-970f-0eb4403e2f37-kube-api-access-7dfgd" (OuterVolumeSpecName: "kube-api-access-7dfgd") pod "7b1ab9b4-c19a-4a29-970f-0eb4403e2f37" (UID: "7b1ab9b4-c19a-4a29-970f-0eb4403e2f37"). InnerVolumeSpecName "kube-api-access-7dfgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:20:53 crc kubenswrapper[4892]: I0122 10:20:53.936064 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dfgd\" (UniqueName: \"kubernetes.io/projected/7b1ab9b4-c19a-4a29-970f-0eb4403e2f37-kube-api-access-7dfgd\") on node \"crc\" DevicePath \"\"" Jan 22 10:20:53 crc kubenswrapper[4892]: I0122 10:20:53.959552 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b1ab9b4-c19a-4a29-970f-0eb4403e2f37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b1ab9b4-c19a-4a29-970f-0eb4403e2f37" (UID: "7b1ab9b4-c19a-4a29-970f-0eb4403e2f37"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:20:54 crc kubenswrapper[4892]: I0122 10:20:54.037842 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b1ab9b4-c19a-4a29-970f-0eb4403e2f37-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:20:54 crc kubenswrapper[4892]: I0122 10:20:54.308435 4892 generic.go:334] "Generic (PLEG): container finished" podID="7b1ab9b4-c19a-4a29-970f-0eb4403e2f37" containerID="dfcf2b87e3a92940d9dd40575d13670003db105ca918be417ffe1f8b5b19ee37" exitCode=0 Jan 22 10:20:54 crc kubenswrapper[4892]: I0122 10:20:54.308477 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4575g" Jan 22 10:20:54 crc kubenswrapper[4892]: I0122 10:20:54.308480 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4575g" event={"ID":"7b1ab9b4-c19a-4a29-970f-0eb4403e2f37","Type":"ContainerDied","Data":"dfcf2b87e3a92940d9dd40575d13670003db105ca918be417ffe1f8b5b19ee37"} Jan 22 10:20:54 crc kubenswrapper[4892]: I0122 10:20:54.308511 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4575g" event={"ID":"7b1ab9b4-c19a-4a29-970f-0eb4403e2f37","Type":"ContainerDied","Data":"9e389237050600c934a4a6030962a4cab63004ad9a8c3321d6635f2d8e082663"} Jan 22 10:20:54 crc kubenswrapper[4892]: I0122 10:20:54.308530 4892 scope.go:117] "RemoveContainer" containerID="dfcf2b87e3a92940d9dd40575d13670003db105ca918be417ffe1f8b5b19ee37" Jan 22 10:20:54 crc kubenswrapper[4892]: I0122 10:20:54.327342 4892 scope.go:117] "RemoveContainer" containerID="2b6bed2a4277ef7e2f144b02af5584a9389faad335b26469eb2ca250eed91462" Jan 22 10:20:54 crc kubenswrapper[4892]: I0122 10:20:54.358409 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4575g"] Jan 22 10:20:54 crc kubenswrapper[4892]: I0122 10:20:54.367128 4892 scope.go:117] "RemoveContainer" containerID="690229e54fffd2af53f4d9e806190ed2da5818a61461c84c8cdbb4e131e54fab" Jan 22 10:20:54 crc kubenswrapper[4892]: I0122 10:20:54.369394 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4575g"] Jan 22 10:20:54 crc kubenswrapper[4892]: I0122 10:20:54.401020 4892 scope.go:117] "RemoveContainer" containerID="dfcf2b87e3a92940d9dd40575d13670003db105ca918be417ffe1f8b5b19ee37" Jan 22 10:20:54 crc kubenswrapper[4892]: E0122 10:20:54.401813 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfcf2b87e3a92940d9dd40575d13670003db105ca918be417ffe1f8b5b19ee37\": container with ID starting with dfcf2b87e3a92940d9dd40575d13670003db105ca918be417ffe1f8b5b19ee37 not found: ID does not exist" containerID="dfcf2b87e3a92940d9dd40575d13670003db105ca918be417ffe1f8b5b19ee37" Jan 22 10:20:54 crc kubenswrapper[4892]: I0122 10:20:54.402037 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfcf2b87e3a92940d9dd40575d13670003db105ca918be417ffe1f8b5b19ee37"} err="failed to get container status \"dfcf2b87e3a92940d9dd40575d13670003db105ca918be417ffe1f8b5b19ee37\": rpc error: code = NotFound desc = could not find container \"dfcf2b87e3a92940d9dd40575d13670003db105ca918be417ffe1f8b5b19ee37\": container with ID starting with dfcf2b87e3a92940d9dd40575d13670003db105ca918be417ffe1f8b5b19ee37 not found: ID does not exist" Jan 22 10:20:54 crc 
kubenswrapper[4892]: I0122 10:20:54.402155 4892 scope.go:117] "RemoveContainer" containerID="2b6bed2a4277ef7e2f144b02af5584a9389faad335b26469eb2ca250eed91462" Jan 22 10:20:54 crc kubenswrapper[4892]: E0122 10:20:54.403655 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b6bed2a4277ef7e2f144b02af5584a9389faad335b26469eb2ca250eed91462\": container with ID starting with 2b6bed2a4277ef7e2f144b02af5584a9389faad335b26469eb2ca250eed91462 not found: ID does not exist" containerID="2b6bed2a4277ef7e2f144b02af5584a9389faad335b26469eb2ca250eed91462" Jan 22 10:20:54 crc kubenswrapper[4892]: I0122 10:20:54.404122 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b6bed2a4277ef7e2f144b02af5584a9389faad335b26469eb2ca250eed91462"} err="failed to get container status \"2b6bed2a4277ef7e2f144b02af5584a9389faad335b26469eb2ca250eed91462\": rpc error: code = NotFound desc = could not find container \"2b6bed2a4277ef7e2f144b02af5584a9389faad335b26469eb2ca250eed91462\": container with ID starting with 2b6bed2a4277ef7e2f144b02af5584a9389faad335b26469eb2ca250eed91462 not found: ID does not exist" Jan 22 10:20:54 crc kubenswrapper[4892]: I0122 10:20:54.404227 4892 scope.go:117] "RemoveContainer" containerID="690229e54fffd2af53f4d9e806190ed2da5818a61461c84c8cdbb4e131e54fab" Jan 22 10:20:54 crc kubenswrapper[4892]: E0122 10:20:54.404964 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"690229e54fffd2af53f4d9e806190ed2da5818a61461c84c8cdbb4e131e54fab\": container with ID starting with 690229e54fffd2af53f4d9e806190ed2da5818a61461c84c8cdbb4e131e54fab not found: ID does not exist" containerID="690229e54fffd2af53f4d9e806190ed2da5818a61461c84c8cdbb4e131e54fab" Jan 22 10:20:54 crc kubenswrapper[4892]: I0122 10:20:54.405009 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"690229e54fffd2af53f4d9e806190ed2da5818a61461c84c8cdbb4e131e54fab"} err="failed to get container status \"690229e54fffd2af53f4d9e806190ed2da5818a61461c84c8cdbb4e131e54fab\": rpc error: code = NotFound desc = could not find container \"690229e54fffd2af53f4d9e806190ed2da5818a61461c84c8cdbb4e131e54fab\": container with ID starting with 690229e54fffd2af53f4d9e806190ed2da5818a61461c84c8cdbb4e131e54fab not found: ID does not exist" Jan 22 10:20:55 crc kubenswrapper[4892]: I0122 10:20:55.429983 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b1ab9b4-c19a-4a29-970f-0eb4403e2f37" path="/var/lib/kubelet/pods/7b1ab9b4-c19a-4a29-970f-0eb4403e2f37/volumes" Jan 22 10:23:16 crc kubenswrapper[4892]: I0122 10:23:16.323761 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:23:16 crc kubenswrapper[4892]: I0122 10:23:16.324302 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:23:46 crc kubenswrapper[4892]: I0122 10:23:46.323964 4892 patch_prober.go:28] interesting 
pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:23:46 crc kubenswrapper[4892]: I0122 10:23:46.324571 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:24:06 crc kubenswrapper[4892]: I0122 10:24:06.042097 4892 generic.go:334] "Generic (PLEG): container finished" podID="7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3" containerID="a32dde9898a572cc0dd463aaaee71a5dd675112fa151f7af9467d42a3c49debe" exitCode=0 Jan 22 10:24:06 crc kubenswrapper[4892]: I0122 10:24:06.042188 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qgdnd/must-gather-z7s6w" event={"ID":"7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3","Type":"ContainerDied","Data":"a32dde9898a572cc0dd463aaaee71a5dd675112fa151f7af9467d42a3c49debe"} Jan 22 10:24:06 crc kubenswrapper[4892]: I0122 10:24:06.044031 4892 scope.go:117] "RemoveContainer" containerID="a32dde9898a572cc0dd463aaaee71a5dd675112fa151f7af9467d42a3c49debe" Jan 22 10:24:06 crc kubenswrapper[4892]: I0122 10:24:06.222073 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qgdnd_must-gather-z7s6w_7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3/gather/0.log" Jan 22 10:24:13 crc kubenswrapper[4892]: I0122 10:24:13.787230 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qgdnd/must-gather-z7s6w"] Jan 22 10:24:13 crc kubenswrapper[4892]: I0122 10:24:13.788094 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-qgdnd/must-gather-z7s6w" podUID="7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3" containerName="copy" containerID="cri-o://505f593942a9adf3f54b17348e63bd5ac5794e476bdab09c68f356daae6c2c46" gracePeriod=2 Jan 22 10:24:13 crc kubenswrapper[4892]: I0122 10:24:13.801617 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qgdnd/must-gather-z7s6w"] Jan 22 10:24:13 crc kubenswrapper[4892]: E0122 10:24:13.910743 4892 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c19bfc9_9e78_418b_bddd_ab04e8d7a0f3.slice/crio-505f593942a9adf3f54b17348e63bd5ac5794e476bdab09c68f356daae6c2c46.scope\": RecentStats: unable to find data in memory cache]" Jan 22 10:24:14 crc kubenswrapper[4892]: I0122 10:24:14.110873 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qgdnd_must-gather-z7s6w_7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3/copy/0.log" Jan 22 10:24:14 crc kubenswrapper[4892]: I0122 10:24:14.111681 4892 generic.go:334] "Generic (PLEG): container finished" podID="7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3" containerID="505f593942a9adf3f54b17348e63bd5ac5794e476bdab09c68f356daae6c2c46" exitCode=143 Jan 22 10:24:14 crc kubenswrapper[4892]: I0122 10:24:14.428604 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qgdnd_must-gather-z7s6w_7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3/copy/0.log" Jan 22 10:24:14 crc kubenswrapper[4892]: I0122 10:24:14.429357 4892 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-must-gather-qgdnd/must-gather-z7s6w" Jan 22 10:24:14 crc kubenswrapper[4892]: I0122 10:24:14.456157 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gm95z\" (UniqueName: \"kubernetes.io/projected/7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3-kube-api-access-gm95z\") pod \"7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3\" (UID: \"7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3\") " Jan 22 10:24:14 crc kubenswrapper[4892]: I0122 10:24:14.456444 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3-must-gather-output\") pod \"7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3\" (UID: \"7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3\") " Jan 22 10:24:14 crc kubenswrapper[4892]: I0122 10:24:14.462526 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3-kube-api-access-gm95z" (OuterVolumeSpecName: "kube-api-access-gm95z") pod "7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3" (UID: "7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3"). InnerVolumeSpecName "kube-api-access-gm95z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:24:14 crc kubenswrapper[4892]: I0122 10:24:14.558599 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gm95z\" (UniqueName: \"kubernetes.io/projected/7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3-kube-api-access-gm95z\") on node \"crc\" DevicePath \"\"" Jan 22 10:24:14 crc kubenswrapper[4892]: I0122 10:24:14.642394 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3" (UID: "7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:24:14 crc kubenswrapper[4892]: I0122 10:24:14.661141 4892 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 22 10:24:15 crc kubenswrapper[4892]: I0122 10:24:15.122445 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qgdnd_must-gather-z7s6w_7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3/copy/0.log" Jan 22 10:24:15 crc kubenswrapper[4892]: I0122 10:24:15.123123 4892 scope.go:117] "RemoveContainer" containerID="505f593942a9adf3f54b17348e63bd5ac5794e476bdab09c68f356daae6c2c46" Jan 22 10:24:15 crc kubenswrapper[4892]: I0122 10:24:15.123178 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qgdnd/must-gather-z7s6w" Jan 22 10:24:15 crc kubenswrapper[4892]: I0122 10:24:15.143910 4892 scope.go:117] "RemoveContainer" containerID="a32dde9898a572cc0dd463aaaee71a5dd675112fa151f7af9467d42a3c49debe" Jan 22 10:24:15 crc kubenswrapper[4892]: I0122 10:24:15.429880 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3" path="/var/lib/kubelet/pods/7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3/volumes" Jan 22 10:24:16 crc kubenswrapper[4892]: I0122 10:24:16.323182 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:24:16 crc kubenswrapper[4892]: I0122 10:24:16.323255 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:24:16 crc kubenswrapper[4892]: I0122 10:24:16.323319 4892 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" Jan 22 10:24:16 crc kubenswrapper[4892]: I0122 10:24:16.324112 4892 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a2823515206b3fbd88435a728042268cd85e4c8af005a094c2894a1d295d0fe6"} pod="openshift-machine-config-operator/machine-config-daemon-w87tf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 10:24:16 crc kubenswrapper[4892]: I0122 10:24:16.324186 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" containerID="cri-o://a2823515206b3fbd88435a728042268cd85e4c8af005a094c2894a1d295d0fe6" gracePeriod=600 Jan 22 10:24:16 crc kubenswrapper[4892]: E0122 10:24:16.966816 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:24:17 crc kubenswrapper[4892]: I0122 10:24:17.147639 4892 generic.go:334] "Generic (PLEG): container finished" podID="4765e554-3060-4876-90fe-5e054619d7a1" containerID="a2823515206b3fbd88435a728042268cd85e4c8af005a094c2894a1d295d0fe6" exitCode=0 Jan 22 10:24:17 crc kubenswrapper[4892]: I0122 10:24:17.147650 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerDied","Data":"a2823515206b3fbd88435a728042268cd85e4c8af005a094c2894a1d295d0fe6"} Jan 22 10:24:17 crc kubenswrapper[4892]: I0122 10:24:17.148021 4892 scope.go:117] "RemoveContainer" containerID="cec64c8c58561c336f39be908fd134c1f2197687f00578cd16620431d59c86d4" Jan 
22 10:24:17 crc kubenswrapper[4892]: I0122 10:24:17.148994 4892 scope.go:117] "RemoveContainer" containerID="a2823515206b3fbd88435a728042268cd85e4c8af005a094c2894a1d295d0fe6" Jan 22 10:24:17 crc kubenswrapper[4892]: E0122 10:24:17.149344 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:24:18 crc kubenswrapper[4892]: I0122 10:24:18.748118 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l4v57"] Jan 22 10:24:18 crc kubenswrapper[4892]: E0122 10:24:18.748951 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3" containerName="gather" Jan 22 10:24:18 crc kubenswrapper[4892]: I0122 10:24:18.748969 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3" containerName="gather" Jan 22 10:24:18 crc kubenswrapper[4892]: E0122 10:24:18.749002 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b1ab9b4-c19a-4a29-970f-0eb4403e2f37" containerName="extract-utilities" Jan 22 10:24:18 crc kubenswrapper[4892]: I0122 10:24:18.749010 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b1ab9b4-c19a-4a29-970f-0eb4403e2f37" containerName="extract-utilities" Jan 22 10:24:18 crc kubenswrapper[4892]: E0122 10:24:18.749019 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3" containerName="copy" Jan 22 10:24:18 crc kubenswrapper[4892]: I0122 10:24:18.749028 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3" containerName="copy" Jan 22 10:24:18 crc kubenswrapper[4892]: E0122 10:24:18.749052 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b1ab9b4-c19a-4a29-970f-0eb4403e2f37" containerName="registry-server" Jan 22 10:24:18 crc kubenswrapper[4892]: I0122 10:24:18.749060 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b1ab9b4-c19a-4a29-970f-0eb4403e2f37" containerName="registry-server" Jan 22 10:24:18 crc kubenswrapper[4892]: E0122 10:24:18.749098 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b1ab9b4-c19a-4a29-970f-0eb4403e2f37" containerName="extract-content" Jan 22 10:24:18 crc kubenswrapper[4892]: I0122 10:24:18.749105 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b1ab9b4-c19a-4a29-970f-0eb4403e2f37" containerName="extract-content" Jan 22 10:24:18 crc kubenswrapper[4892]: I0122 10:24:18.749610 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b1ab9b4-c19a-4a29-970f-0eb4403e2f37" containerName="registry-server" Jan 22 10:24:18 crc kubenswrapper[4892]: I0122 10:24:18.749646 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3" containerName="gather" Jan 22 10:24:18 crc kubenswrapper[4892]: I0122 10:24:18.749670 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c19bfc9-9e78-418b-bddd-ab04e8d7a0f3" containerName="copy" Jan 22 10:24:18 crc kubenswrapper[4892]: I0122 10:24:18.753607 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l4v57" Jan 22 10:24:18 crc kubenswrapper[4892]: I0122 10:24:18.766732 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l4v57"] Jan 22 10:24:18 crc kubenswrapper[4892]: I0122 10:24:18.834130 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4034fb51-0dce-457a-b7c8-84e183ed8c90-catalog-content\") pod \"community-operators-l4v57\" (UID: \"4034fb51-0dce-457a-b7c8-84e183ed8c90\") " pod="openshift-marketplace/community-operators-l4v57" Jan 22 10:24:18 crc kubenswrapper[4892]: I0122 10:24:18.835049 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4034fb51-0dce-457a-b7c8-84e183ed8c90-utilities\") pod \"community-operators-l4v57\" (UID: \"4034fb51-0dce-457a-b7c8-84e183ed8c90\") " pod="openshift-marketplace/community-operators-l4v57" Jan 22 10:24:18 crc kubenswrapper[4892]: I0122 10:24:18.835419 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hzrl\" (UniqueName: \"kubernetes.io/projected/4034fb51-0dce-457a-b7c8-84e183ed8c90-kube-api-access-9hzrl\") pod \"community-operators-l4v57\" (UID: \"4034fb51-0dce-457a-b7c8-84e183ed8c90\") " pod="openshift-marketplace/community-operators-l4v57" Jan 22 10:24:18 crc kubenswrapper[4892]: I0122 10:24:18.936380 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4034fb51-0dce-457a-b7c8-84e183ed8c90-catalog-content\") pod \"community-operators-l4v57\" (UID: \"4034fb51-0dce-457a-b7c8-84e183ed8c90\") " pod="openshift-marketplace/community-operators-l4v57" Jan 22 10:24:18 crc kubenswrapper[4892]: I0122 10:24:18.936480 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4034fb51-0dce-457a-b7c8-84e183ed8c90-utilities\") pod \"community-operators-l4v57\" (UID: \"4034fb51-0dce-457a-b7c8-84e183ed8c90\") " pod="openshift-marketplace/community-operators-l4v57" Jan 22 10:24:18 crc kubenswrapper[4892]: I0122 10:24:18.936507 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hzrl\" (UniqueName: \"kubernetes.io/projected/4034fb51-0dce-457a-b7c8-84e183ed8c90-kube-api-access-9hzrl\") pod \"community-operators-l4v57\" (UID: \"4034fb51-0dce-457a-b7c8-84e183ed8c90\") " pod="openshift-marketplace/community-operators-l4v57" Jan 22 10:24:18 crc kubenswrapper[4892]: I0122 10:24:18.937045 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4034fb51-0dce-457a-b7c8-84e183ed8c90-catalog-content\") pod \"community-operators-l4v57\" (UID: \"4034fb51-0dce-457a-b7c8-84e183ed8c90\") " pod="openshift-marketplace/community-operators-l4v57" Jan 22 10:24:18 crc kubenswrapper[4892]: I0122 10:24:18.937258 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4034fb51-0dce-457a-b7c8-84e183ed8c90-utilities\") pod \"community-operators-l4v57\" (UID: \"4034fb51-0dce-457a-b7c8-84e183ed8c90\") " pod="openshift-marketplace/community-operators-l4v57" Jan 22 10:24:19 crc kubenswrapper[4892]: I0122 10:24:19.061497 4892 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9hzrl\" (UniqueName: \"kubernetes.io/projected/4034fb51-0dce-457a-b7c8-84e183ed8c90-kube-api-access-9hzrl\") pod \"community-operators-l4v57\" (UID: \"4034fb51-0dce-457a-b7c8-84e183ed8c90\") " pod="openshift-marketplace/community-operators-l4v57" Jan 22 10:24:19 crc kubenswrapper[4892]: I0122 10:24:19.117800 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l4v57" Jan 22 10:24:19 crc kubenswrapper[4892]: I0122 10:24:19.671247 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l4v57"] Jan 22 10:24:20 crc kubenswrapper[4892]: I0122 10:24:20.181761 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l4v57" event={"ID":"4034fb51-0dce-457a-b7c8-84e183ed8c90","Type":"ContainerStarted","Data":"f05b13347c50a0cbd38d85b34cbeef73a657844cb689e88e3e055047e00c9fcc"} Jan 22 10:24:21 crc kubenswrapper[4892]: I0122 10:24:21.191544 4892 generic.go:334] "Generic (PLEG): container finished" podID="4034fb51-0dce-457a-b7c8-84e183ed8c90" containerID="42546ba6d322a29ba7112d4b81e8accdc9393d7a600f3b89718a4aac990baf4c" exitCode=0 Jan 22 10:24:21 crc kubenswrapper[4892]: I0122 10:24:21.191607 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l4v57" event={"ID":"4034fb51-0dce-457a-b7c8-84e183ed8c90","Type":"ContainerDied","Data":"42546ba6d322a29ba7112d4b81e8accdc9393d7a600f3b89718a4aac990baf4c"} Jan 22 10:24:21 crc kubenswrapper[4892]: I0122 10:24:21.194058 4892 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 10:24:27 crc kubenswrapper[4892]: I0122 10:24:27.247652 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l4v57" event={"ID":"4034fb51-0dce-457a-b7c8-84e183ed8c90","Type":"ContainerStarted","Data":"95630acb463ae77568b77b069919ad3f3c6d90ba2258c882b3dadf208faf797e"} Jan 22 10:24:28 crc kubenswrapper[4892]: I0122 10:24:28.258480 4892 generic.go:334] "Generic (PLEG): container finished" podID="4034fb51-0dce-457a-b7c8-84e183ed8c90" containerID="95630acb463ae77568b77b069919ad3f3c6d90ba2258c882b3dadf208faf797e" exitCode=0 Jan 22 10:24:28 crc kubenswrapper[4892]: I0122 10:24:28.258569 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l4v57" event={"ID":"4034fb51-0dce-457a-b7c8-84e183ed8c90","Type":"ContainerDied","Data":"95630acb463ae77568b77b069919ad3f3c6d90ba2258c882b3dadf208faf797e"} Jan 22 10:24:29 crc kubenswrapper[4892]: I0122 10:24:29.269773 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l4v57" event={"ID":"4034fb51-0dce-457a-b7c8-84e183ed8c90","Type":"ContainerStarted","Data":"28652a7502adbdc850813c5a9dd8e07905cbc41e027abc23f8e2a32416c48fb8"} Jan 22 10:24:29 crc kubenswrapper[4892]: I0122 10:24:29.293340 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l4v57" podStartSLOduration=3.774170035 podStartE2EDuration="11.293318335s" podCreationTimestamp="2026-01-22 10:24:18 +0000 UTC" firstStartedPulling="2026-01-22 10:24:21.193831277 +0000 UTC m=+4431.037910340" lastFinishedPulling="2026-01-22 10:24:28.712979577 +0000 UTC m=+4438.557058640" observedRunningTime="2026-01-22 10:24:29.289979951 +0000 UTC m=+4439.134059034" watchObservedRunningTime="2026-01-22 
10:24:29.293318335 +0000 UTC m=+4439.137397418" Jan 22 10:24:31 crc kubenswrapper[4892]: I0122 10:24:31.427177 4892 scope.go:117] "RemoveContainer" containerID="a2823515206b3fbd88435a728042268cd85e4c8af005a094c2894a1d295d0fe6" Jan 22 10:24:31 crc kubenswrapper[4892]: E0122 10:24:31.427835 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:24:39 crc kubenswrapper[4892]: I0122 10:24:39.118123 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l4v57" Jan 22 10:24:39 crc kubenswrapper[4892]: I0122 10:24:39.118693 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l4v57" Jan 22 10:24:39 crc kubenswrapper[4892]: I0122 10:24:39.302587 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l4v57" Jan 22 10:24:39 crc kubenswrapper[4892]: I0122 10:24:39.398567 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l4v57" Jan 22 10:24:39 crc kubenswrapper[4892]: I0122 10:24:39.539995 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l4v57"] Jan 22 10:24:41 crc kubenswrapper[4892]: I0122 10:24:41.369046 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l4v57" podUID="4034fb51-0dce-457a-b7c8-84e183ed8c90" containerName="registry-server" containerID="cri-o://28652a7502adbdc850813c5a9dd8e07905cbc41e027abc23f8e2a32416c48fb8" gracePeriod=2 Jan 22 10:24:41 crc kubenswrapper[4892]: I0122 10:24:41.824499 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l4v57" Jan 22 10:24:41 crc kubenswrapper[4892]: I0122 10:24:41.874072 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4034fb51-0dce-457a-b7c8-84e183ed8c90-catalog-content\") pod \"4034fb51-0dce-457a-b7c8-84e183ed8c90\" (UID: \"4034fb51-0dce-457a-b7c8-84e183ed8c90\") " Jan 22 10:24:41 crc kubenswrapper[4892]: I0122 10:24:41.874126 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4034fb51-0dce-457a-b7c8-84e183ed8c90-utilities\") pod \"4034fb51-0dce-457a-b7c8-84e183ed8c90\" (UID: \"4034fb51-0dce-457a-b7c8-84e183ed8c90\") " Jan 22 10:24:41 crc kubenswrapper[4892]: I0122 10:24:41.874233 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hzrl\" (UniqueName: \"kubernetes.io/projected/4034fb51-0dce-457a-b7c8-84e183ed8c90-kube-api-access-9hzrl\") pod \"4034fb51-0dce-457a-b7c8-84e183ed8c90\" (UID: \"4034fb51-0dce-457a-b7c8-84e183ed8c90\") " Jan 22 10:24:41 crc kubenswrapper[4892]: I0122 10:24:41.875143 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4034fb51-0dce-457a-b7c8-84e183ed8c90-utilities" (OuterVolumeSpecName: "utilities") pod "4034fb51-0dce-457a-b7c8-84e183ed8c90" (UID: "4034fb51-0dce-457a-b7c8-84e183ed8c90"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:24:41 crc kubenswrapper[4892]: I0122 10:24:41.889259 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4034fb51-0dce-457a-b7c8-84e183ed8c90-kube-api-access-9hzrl" (OuterVolumeSpecName: "kube-api-access-9hzrl") pod "4034fb51-0dce-457a-b7c8-84e183ed8c90" (UID: "4034fb51-0dce-457a-b7c8-84e183ed8c90"). InnerVolumeSpecName "kube-api-access-9hzrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:24:41 crc kubenswrapper[4892]: I0122 10:24:41.932245 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4034fb51-0dce-457a-b7c8-84e183ed8c90-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4034fb51-0dce-457a-b7c8-84e183ed8c90" (UID: "4034fb51-0dce-457a-b7c8-84e183ed8c90"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:24:41 crc kubenswrapper[4892]: I0122 10:24:41.976802 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hzrl\" (UniqueName: \"kubernetes.io/projected/4034fb51-0dce-457a-b7c8-84e183ed8c90-kube-api-access-9hzrl\") on node \"crc\" DevicePath \"\"" Jan 22 10:24:41 crc kubenswrapper[4892]: I0122 10:24:41.976848 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4034fb51-0dce-457a-b7c8-84e183ed8c90-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:24:41 crc kubenswrapper[4892]: I0122 10:24:41.976859 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4034fb51-0dce-457a-b7c8-84e183ed8c90-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:24:42 crc kubenswrapper[4892]: I0122 10:24:42.382449 4892 generic.go:334] "Generic (PLEG): container finished" podID="4034fb51-0dce-457a-b7c8-84e183ed8c90" containerID="28652a7502adbdc850813c5a9dd8e07905cbc41e027abc23f8e2a32416c48fb8" exitCode=0 Jan 22 10:24:42 crc kubenswrapper[4892]: I0122 10:24:42.382815 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l4v57" Jan 22 10:24:42 crc kubenswrapper[4892]: I0122 10:24:42.382855 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l4v57" event={"ID":"4034fb51-0dce-457a-b7c8-84e183ed8c90","Type":"ContainerDied","Data":"28652a7502adbdc850813c5a9dd8e07905cbc41e027abc23f8e2a32416c48fb8"} Jan 22 10:24:42 crc kubenswrapper[4892]: I0122 10:24:42.382932 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l4v57" event={"ID":"4034fb51-0dce-457a-b7c8-84e183ed8c90","Type":"ContainerDied","Data":"f05b13347c50a0cbd38d85b34cbeef73a657844cb689e88e3e055047e00c9fcc"} Jan 22 10:24:42 crc kubenswrapper[4892]: I0122 10:24:42.382990 4892 scope.go:117] "RemoveContainer" containerID="28652a7502adbdc850813c5a9dd8e07905cbc41e027abc23f8e2a32416c48fb8" Jan 22 10:24:42 crc kubenswrapper[4892]: I0122 10:24:42.411148 4892 scope.go:117] "RemoveContainer" containerID="95630acb463ae77568b77b069919ad3f3c6d90ba2258c882b3dadf208faf797e" Jan 22 10:24:42 crc kubenswrapper[4892]: I0122 10:24:42.420690 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l4v57"] Jan 22 10:24:42 crc kubenswrapper[4892]: I0122 10:24:42.432156 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l4v57"] Jan 22 10:24:42 crc kubenswrapper[4892]: I0122 10:24:42.446256 4892 scope.go:117] "RemoveContainer" containerID="42546ba6d322a29ba7112d4b81e8accdc9393d7a600f3b89718a4aac990baf4c" Jan 22 10:24:42 crc kubenswrapper[4892]: I0122 10:24:42.496396 4892 scope.go:117] "RemoveContainer" containerID="28652a7502adbdc850813c5a9dd8e07905cbc41e027abc23f8e2a32416c48fb8" Jan 22 10:24:42 crc kubenswrapper[4892]: E0122 10:24:42.497085 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28652a7502adbdc850813c5a9dd8e07905cbc41e027abc23f8e2a32416c48fb8\": container with ID starting with 28652a7502adbdc850813c5a9dd8e07905cbc41e027abc23f8e2a32416c48fb8 not found: ID does not exist" containerID="28652a7502adbdc850813c5a9dd8e07905cbc41e027abc23f8e2a32416c48fb8" Jan 22 10:24:42 crc kubenswrapper[4892]: I0122 10:24:42.497128 
4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28652a7502adbdc850813c5a9dd8e07905cbc41e027abc23f8e2a32416c48fb8"} err="failed to get container status \"28652a7502adbdc850813c5a9dd8e07905cbc41e027abc23f8e2a32416c48fb8\": rpc error: code = NotFound desc = could not find container \"28652a7502adbdc850813c5a9dd8e07905cbc41e027abc23f8e2a32416c48fb8\": container with ID starting with 28652a7502adbdc850813c5a9dd8e07905cbc41e027abc23f8e2a32416c48fb8 not found: ID does not exist" Jan 22 10:24:42 crc kubenswrapper[4892]: I0122 10:24:42.497153 4892 scope.go:117] "RemoveContainer" containerID="95630acb463ae77568b77b069919ad3f3c6d90ba2258c882b3dadf208faf797e" Jan 22 10:24:42 crc kubenswrapper[4892]: E0122 10:24:42.497760 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95630acb463ae77568b77b069919ad3f3c6d90ba2258c882b3dadf208faf797e\": container with ID starting with 95630acb463ae77568b77b069919ad3f3c6d90ba2258c882b3dadf208faf797e not found: ID does not exist" containerID="95630acb463ae77568b77b069919ad3f3c6d90ba2258c882b3dadf208faf797e" Jan 22 10:24:42 crc kubenswrapper[4892]: I0122 10:24:42.497788 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95630acb463ae77568b77b069919ad3f3c6d90ba2258c882b3dadf208faf797e"} err="failed to get container status \"95630acb463ae77568b77b069919ad3f3c6d90ba2258c882b3dadf208faf797e\": rpc error: code = NotFound desc = could not find container \"95630acb463ae77568b77b069919ad3f3c6d90ba2258c882b3dadf208faf797e\": container with ID starting with 95630acb463ae77568b77b069919ad3f3c6d90ba2258c882b3dadf208faf797e not found: ID does not exist" Jan 22 10:24:42 crc kubenswrapper[4892]: I0122 10:24:42.497805 4892 scope.go:117] "RemoveContainer" containerID="42546ba6d322a29ba7112d4b81e8accdc9393d7a600f3b89718a4aac990baf4c" Jan 22 10:24:42 crc kubenswrapper[4892]: E0122 10:24:42.498181 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42546ba6d322a29ba7112d4b81e8accdc9393d7a600f3b89718a4aac990baf4c\": container with ID starting with 42546ba6d322a29ba7112d4b81e8accdc9393d7a600f3b89718a4aac990baf4c not found: ID does not exist" containerID="42546ba6d322a29ba7112d4b81e8accdc9393d7a600f3b89718a4aac990baf4c" Jan 22 10:24:42 crc kubenswrapper[4892]: I0122 10:24:42.498236 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42546ba6d322a29ba7112d4b81e8accdc9393d7a600f3b89718a4aac990baf4c"} err="failed to get container status \"42546ba6d322a29ba7112d4b81e8accdc9393d7a600f3b89718a4aac990baf4c\": rpc error: code = NotFound desc = could not find container \"42546ba6d322a29ba7112d4b81e8accdc9393d7a600f3b89718a4aac990baf4c\": container with ID starting with 42546ba6d322a29ba7112d4b81e8accdc9393d7a600f3b89718a4aac990baf4c not found: ID does not exist" Jan 22 10:24:43 crc kubenswrapper[4892]: I0122 10:24:43.418969 4892 scope.go:117] "RemoveContainer" containerID="a2823515206b3fbd88435a728042268cd85e4c8af005a094c2894a1d295d0fe6" Jan 22 10:24:43 crc kubenswrapper[4892]: E0122 10:24:43.419405 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:24:43 crc kubenswrapper[4892]: I0122 10:24:43.430266 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4034fb51-0dce-457a-b7c8-84e183ed8c90" path="/var/lib/kubelet/pods/4034fb51-0dce-457a-b7c8-84e183ed8c90/volumes" Jan 22 10:24:58 crc kubenswrapper[4892]: I0122 10:24:58.418880 4892 scope.go:117] "RemoveContainer" containerID="a2823515206b3fbd88435a728042268cd85e4c8af005a094c2894a1d295d0fe6" Jan 22 10:24:58 crc kubenswrapper[4892]: E0122 10:24:58.420753 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:25:12 crc kubenswrapper[4892]: I0122 10:25:12.418890 4892 scope.go:117] "RemoveContainer" containerID="a2823515206b3fbd88435a728042268cd85e4c8af005a094c2894a1d295d0fe6" Jan 22 10:25:12 crc kubenswrapper[4892]: E0122 10:25:12.419555 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:25:26 crc kubenswrapper[4892]: I0122 10:25:26.418620 4892 scope.go:117] "RemoveContainer" containerID="a2823515206b3fbd88435a728042268cd85e4c8af005a094c2894a1d295d0fe6" Jan 22 10:25:26 crc kubenswrapper[4892]: E0122 10:25:26.419356 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:25:41 crc kubenswrapper[4892]: I0122 10:25:41.426574 4892 scope.go:117] "RemoveContainer" containerID="a2823515206b3fbd88435a728042268cd85e4c8af005a094c2894a1d295d0fe6" Jan 22 10:25:41 crc kubenswrapper[4892]: E0122 10:25:41.427572 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:25:44 crc kubenswrapper[4892]: I0122 10:25:44.079742 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cnzvv/must-gather-7lf9l"] Jan 22 10:25:44 crc kubenswrapper[4892]: E0122 10:25:44.081244 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4034fb51-0dce-457a-b7c8-84e183ed8c90" containerName="extract-utilities" Jan 22 10:25:44 crc 
kubenswrapper[4892]: I0122 10:25:44.081264 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="4034fb51-0dce-457a-b7c8-84e183ed8c90" containerName="extract-utilities" Jan 22 10:25:44 crc kubenswrapper[4892]: E0122 10:25:44.081324 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4034fb51-0dce-457a-b7c8-84e183ed8c90" containerName="registry-server" Jan 22 10:25:44 crc kubenswrapper[4892]: I0122 10:25:44.081332 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="4034fb51-0dce-457a-b7c8-84e183ed8c90" containerName="registry-server" Jan 22 10:25:44 crc kubenswrapper[4892]: E0122 10:25:44.081353 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4034fb51-0dce-457a-b7c8-84e183ed8c90" containerName="extract-content" Jan 22 10:25:44 crc kubenswrapper[4892]: I0122 10:25:44.081360 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="4034fb51-0dce-457a-b7c8-84e183ed8c90" containerName="extract-content" Jan 22 10:25:44 crc kubenswrapper[4892]: I0122 10:25:44.081634 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="4034fb51-0dce-457a-b7c8-84e183ed8c90" containerName="registry-server" Jan 22 10:25:44 crc kubenswrapper[4892]: I0122 10:25:44.083017 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cnzvv/must-gather-7lf9l" Jan 22 10:25:44 crc kubenswrapper[4892]: I0122 10:25:44.087523 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-cnzvv"/"default-dockercfg-k5k7q" Jan 22 10:25:44 crc kubenswrapper[4892]: I0122 10:25:44.087754 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-cnzvv"/"openshift-service-ca.crt" Jan 22 10:25:44 crc kubenswrapper[4892]: I0122 10:25:44.088232 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-cnzvv"/"kube-root-ca.crt" Jan 22 10:25:44 crc kubenswrapper[4892]: I0122 10:25:44.117002 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cnzvv/must-gather-7lf9l"] Jan 22 10:25:44 crc kubenswrapper[4892]: I0122 10:25:44.211367 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c55dcac5-e0d5-4593-9edb-eb5f847f8d47-must-gather-output\") pod \"must-gather-7lf9l\" (UID: \"c55dcac5-e0d5-4593-9edb-eb5f847f8d47\") " pod="openshift-must-gather-cnzvv/must-gather-7lf9l" Jan 22 10:25:44 crc kubenswrapper[4892]: I0122 10:25:44.211438 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlfjh\" (UniqueName: \"kubernetes.io/projected/c55dcac5-e0d5-4593-9edb-eb5f847f8d47-kube-api-access-hlfjh\") pod \"must-gather-7lf9l\" (UID: \"c55dcac5-e0d5-4593-9edb-eb5f847f8d47\") " pod="openshift-must-gather-cnzvv/must-gather-7lf9l" Jan 22 10:25:44 crc kubenswrapper[4892]: I0122 10:25:44.319297 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c55dcac5-e0d5-4593-9edb-eb5f847f8d47-must-gather-output\") pod \"must-gather-7lf9l\" (UID: \"c55dcac5-e0d5-4593-9edb-eb5f847f8d47\") " pod="openshift-must-gather-cnzvv/must-gather-7lf9l" Jan 22 10:25:44 crc kubenswrapper[4892]: I0122 10:25:44.319416 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlfjh\" (UniqueName: 
\"kubernetes.io/projected/c55dcac5-e0d5-4593-9edb-eb5f847f8d47-kube-api-access-hlfjh\") pod \"must-gather-7lf9l\" (UID: \"c55dcac5-e0d5-4593-9edb-eb5f847f8d47\") " pod="openshift-must-gather-cnzvv/must-gather-7lf9l" Jan 22 10:25:44 crc kubenswrapper[4892]: I0122 10:25:44.320493 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c55dcac5-e0d5-4593-9edb-eb5f847f8d47-must-gather-output\") pod \"must-gather-7lf9l\" (UID: \"c55dcac5-e0d5-4593-9edb-eb5f847f8d47\") " pod="openshift-must-gather-cnzvv/must-gather-7lf9l" Jan 22 10:25:44 crc kubenswrapper[4892]: I0122 10:25:44.354764 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlfjh\" (UniqueName: \"kubernetes.io/projected/c55dcac5-e0d5-4593-9edb-eb5f847f8d47-kube-api-access-hlfjh\") pod \"must-gather-7lf9l\" (UID: \"c55dcac5-e0d5-4593-9edb-eb5f847f8d47\") " pod="openshift-must-gather-cnzvv/must-gather-7lf9l" Jan 22 10:25:44 crc kubenswrapper[4892]: I0122 10:25:44.420471 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cnzvv/must-gather-7lf9l" Jan 22 10:25:44 crc kubenswrapper[4892]: I0122 10:25:44.911997 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cnzvv/must-gather-7lf9l"] Jan 22 10:25:45 crc kubenswrapper[4892]: I0122 10:25:45.908660 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cnzvv/must-gather-7lf9l" event={"ID":"c55dcac5-e0d5-4593-9edb-eb5f847f8d47","Type":"ContainerStarted","Data":"82de0404811546e3726641b9f35c0ed7b004346161307e870d6d006f2eaaa6e6"} Jan 22 10:25:45 crc kubenswrapper[4892]: I0122 10:25:45.909008 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cnzvv/must-gather-7lf9l" event={"ID":"c55dcac5-e0d5-4593-9edb-eb5f847f8d47","Type":"ContainerStarted","Data":"077ce771e2d474061e17ea009a8159e383a769bdaa861a3130e9e49df88c75f8"} Jan 22 10:25:45 crc kubenswrapper[4892]: I0122 10:25:45.909024 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cnzvv/must-gather-7lf9l" event={"ID":"c55dcac5-e0d5-4593-9edb-eb5f847f8d47","Type":"ContainerStarted","Data":"bda7a26fb0408d837c32b32633dc6ec6f9b769a6355c9c280248dac9ab43d1c4"} Jan 22 10:25:45 crc kubenswrapper[4892]: I0122 10:25:45.934623 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-cnzvv/must-gather-7lf9l" podStartSLOduration=1.934596837 podStartE2EDuration="1.934596837s" podCreationTimestamp="2026-01-22 10:25:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:25:45.928488893 +0000 UTC m=+4515.772567956" watchObservedRunningTime="2026-01-22 10:25:45.934596837 +0000 UTC m=+4515.778675900" Jan 22 10:25:49 crc kubenswrapper[4892]: I0122 10:25:49.692236 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cnzvv/crc-debug-g4qxb"] Jan 22 10:25:49 crc kubenswrapper[4892]: I0122 10:25:49.694193 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cnzvv/crc-debug-g4qxb" Jan 22 10:25:49 crc kubenswrapper[4892]: I0122 10:25:49.826687 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p22gj\" (UniqueName: \"kubernetes.io/projected/4cfbac31-0f60-48cc-91e8-7354a7166278-kube-api-access-p22gj\") pod \"crc-debug-g4qxb\" (UID: \"4cfbac31-0f60-48cc-91e8-7354a7166278\") " pod="openshift-must-gather-cnzvv/crc-debug-g4qxb" Jan 22 10:25:49 crc kubenswrapper[4892]: I0122 10:25:49.827129 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4cfbac31-0f60-48cc-91e8-7354a7166278-host\") pod \"crc-debug-g4qxb\" (UID: \"4cfbac31-0f60-48cc-91e8-7354a7166278\") " pod="openshift-must-gather-cnzvv/crc-debug-g4qxb" Jan 22 10:25:49 crc kubenswrapper[4892]: I0122 10:25:49.929151 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4cfbac31-0f60-48cc-91e8-7354a7166278-host\") pod \"crc-debug-g4qxb\" (UID: \"4cfbac31-0f60-48cc-91e8-7354a7166278\") " pod="openshift-must-gather-cnzvv/crc-debug-g4qxb" Jan 22 10:25:49 crc kubenswrapper[4892]: I0122 10:25:49.929258 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p22gj\" (UniqueName: \"kubernetes.io/projected/4cfbac31-0f60-48cc-91e8-7354a7166278-kube-api-access-p22gj\") pod \"crc-debug-g4qxb\" (UID: \"4cfbac31-0f60-48cc-91e8-7354a7166278\") " pod="openshift-must-gather-cnzvv/crc-debug-g4qxb" Jan 22 10:25:49 crc kubenswrapper[4892]: I0122 10:25:49.929341 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4cfbac31-0f60-48cc-91e8-7354a7166278-host\") pod \"crc-debug-g4qxb\" (UID: \"4cfbac31-0f60-48cc-91e8-7354a7166278\") " pod="openshift-must-gather-cnzvv/crc-debug-g4qxb" Jan 22 10:25:49 crc kubenswrapper[4892]: I0122 10:25:49.954941 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p22gj\" (UniqueName: \"kubernetes.io/projected/4cfbac31-0f60-48cc-91e8-7354a7166278-kube-api-access-p22gj\") pod \"crc-debug-g4qxb\" (UID: \"4cfbac31-0f60-48cc-91e8-7354a7166278\") " pod="openshift-must-gather-cnzvv/crc-debug-g4qxb" Jan 22 10:25:50 crc kubenswrapper[4892]: I0122 10:25:50.014325 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cnzvv/crc-debug-g4qxb" Jan 22 10:25:50 crc kubenswrapper[4892]: I0122 10:25:50.950074 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cnzvv/crc-debug-g4qxb" event={"ID":"4cfbac31-0f60-48cc-91e8-7354a7166278","Type":"ContainerStarted","Data":"8cf7b1d8d282345d9ebdccddb5bec3eab402fe184108f3cd61e4deab96029ff9"} Jan 22 10:25:50 crc kubenswrapper[4892]: I0122 10:25:50.950535 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cnzvv/crc-debug-g4qxb" event={"ID":"4cfbac31-0f60-48cc-91e8-7354a7166278","Type":"ContainerStarted","Data":"bd4f6be665c7ca728db772673200dac020671515a97396a7be8a981e12ed419e"} Jan 22 10:25:50 crc kubenswrapper[4892]: I0122 10:25:50.968428 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-cnzvv/crc-debug-g4qxb" podStartSLOduration=1.968408484 podStartE2EDuration="1.968408484s" podCreationTimestamp="2026-01-22 10:25:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:25:50.963208793 +0000 UTC m=+4520.807287856" watchObservedRunningTime="2026-01-22 10:25:50.968408484 +0000 UTC m=+4520.812487547" Jan 22 10:25:52 crc kubenswrapper[4892]: I0122 10:25:52.521966 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-59ddd484c6-7p5xf_b812f439-988c-4120-8b36-e21df38c2b97/barbican-api-log/0.log" Jan 22 10:25:52 crc kubenswrapper[4892]: I0122 10:25:52.532252 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-59ddd484c6-7p5xf_b812f439-988c-4120-8b36-e21df38c2b97/barbican-api/0.log" Jan 22 10:25:52 crc kubenswrapper[4892]: I0122 10:25:52.580464 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6cbdbc497d-dqskw_4d7e7ea0-d123-41ab-bd59-0f6da52316bd/barbican-keystone-listener-log/0.log" Jan 22 10:25:52 crc kubenswrapper[4892]: I0122 10:25:52.588457 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6cbdbc497d-dqskw_4d7e7ea0-d123-41ab-bd59-0f6da52316bd/barbican-keystone-listener/0.log" Jan 22 10:25:52 crc kubenswrapper[4892]: I0122 10:25:52.614470 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7c7f45b4bf-xx9p2_1a6b0877-2c23-4ebd-a433-620571e4c0bf/barbican-worker-log/0.log" Jan 22 10:25:52 crc kubenswrapper[4892]: I0122 10:25:52.625491 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7c7f45b4bf-xx9p2_1a6b0877-2c23-4ebd-a433-620571e4c0bf/barbican-worker/0.log" Jan 22 10:25:52 crc kubenswrapper[4892]: I0122 10:25:52.662044 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-2dcnk_3ca49e96-a4fc-4e54-bb55-b32d42d72734/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:25:52 crc kubenswrapper[4892]: I0122 10:25:52.700270 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b08f54a7-5e8e-4143-8585-1c91201b25df/ceilometer-central-agent/0.log" Jan 22 10:25:52 crc kubenswrapper[4892]: I0122 10:25:52.731238 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b08f54a7-5e8e-4143-8585-1c91201b25df/ceilometer-notification-agent/0.log" Jan 22 10:25:52 crc kubenswrapper[4892]: I0122 10:25:52.738967 4892 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_b08f54a7-5e8e-4143-8585-1c91201b25df/sg-core/0.log" Jan 22 10:25:52 crc kubenswrapper[4892]: I0122 10:25:52.751802 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b08f54a7-5e8e-4143-8585-1c91201b25df/proxy-httpd/0.log" Jan 22 10:25:52 crc kubenswrapper[4892]: I0122 10:25:52.773723 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_05319583-8c6d-43a9-88b6-1cba9781f85b/cinder-api-log/0.log" Jan 22 10:25:52 crc kubenswrapper[4892]: I0122 10:25:52.871234 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_05319583-8c6d-43a9-88b6-1cba9781f85b/cinder-api/0.log" Jan 22 10:25:52 crc kubenswrapper[4892]: I0122 10:25:52.952877 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_808833e1-7e58-4b7e-a1bb-ff5cc72b5b35/cinder-scheduler/0.log" Jan 22 10:25:53 crc kubenswrapper[4892]: I0122 10:25:53.011192 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_808833e1-7e58-4b7e-a1bb-ff5cc72b5b35/probe/0.log" Jan 22 10:25:53 crc kubenswrapper[4892]: I0122 10:25:53.036105 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-g4gkj_ab573651-bad0-413d-9c16-46aac4818b9b/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:25:53 crc kubenswrapper[4892]: I0122 10:25:53.063842 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-tqbhz_7ded4dd1-51b6-427d-8f8f-44da3828ef6b/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:25:53 crc kubenswrapper[4892]: I0122 10:25:53.120632 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-66fc59ccbf-nt7cg_e4d2f9f5-3308-487a-871d-b411f6951ead/dnsmasq-dns/0.log" Jan 22 10:25:53 crc kubenswrapper[4892]: I0122 10:25:53.127777 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-66fc59ccbf-nt7cg_e4d2f9f5-3308-487a-871d-b411f6951ead/init/0.log" Jan 22 10:25:53 crc kubenswrapper[4892]: I0122 10:25:53.168965 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-65kt2_9cd3e716-8070-42ec-87ad-4fc03fe2be23/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:25:53 crc kubenswrapper[4892]: I0122 10:25:53.183221 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e16d2673-ef7d-40c6-b1ae-c43fc8771d30/glance-log/0.log" Jan 22 10:25:53 crc kubenswrapper[4892]: I0122 10:25:53.212235 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e16d2673-ef7d-40c6-b1ae-c43fc8771d30/glance-httpd/0.log" Jan 22 10:25:53 crc kubenswrapper[4892]: I0122 10:25:53.228178 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c0d04b37-82ff-4c76-ab88-4602d405c9e0/glance-log/0.log" Jan 22 10:25:53 crc kubenswrapper[4892]: I0122 10:25:53.273193 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c0d04b37-82ff-4c76-ab88-4602d405c9e0/glance-httpd/0.log" Jan 22 10:25:53 crc kubenswrapper[4892]: I0122 10:25:53.745270 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7bd8749ddb-x9h4l_a434b179-017a-4112-a673-1859114a62ed/horizon-log/0.log" 
Jan 22 10:25:53 crc kubenswrapper[4892]: I0122 10:25:53.864481 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7bd8749ddb-x9h4l_a434b179-017a-4112-a673-1859114a62ed/horizon/0.log" Jan 22 10:25:53 crc kubenswrapper[4892]: I0122 10:25:53.890095 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-276h4_1f22ae69-a8dd-4646-836d-d48376094ceb/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:25:53 crc kubenswrapper[4892]: I0122 10:25:53.923665 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-mdwhd_402a9581-6783-46e0-8147-2e443d9a0608/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:25:54 crc kubenswrapper[4892]: I0122 10:25:54.133372 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7647f5f4ff-hmkw9_6ab62f99-9658-4ad6-be05-4f0849b6d6d5/keystone-api/0.log" Jan 22 10:25:54 crc kubenswrapper[4892]: I0122 10:25:54.142082 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29484601-vb56j_675953a2-7c44-4857-a9f6-47dcb2049507/keystone-cron/0.log" Jan 22 10:25:54 crc kubenswrapper[4892]: I0122 10:25:54.152775 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_6948adf9-b332-4b21-82e2-444fc998ebe5/kube-state-metrics/0.log" Jan 22 10:25:54 crc kubenswrapper[4892]: I0122 10:25:54.191945 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-x7bbg_38fc771d-608b-4a8e-a7ec-7cfa932abc41/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:25:56 crc kubenswrapper[4892]: I0122 10:25:56.418727 4892 scope.go:117] "RemoveContainer" containerID="a2823515206b3fbd88435a728042268cd85e4c8af005a094c2894a1d295d0fe6" Jan 22 10:25:56 crc kubenswrapper[4892]: E0122 10:25:56.419621 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:26:07 crc kubenswrapper[4892]: I0122 10:26:07.420768 4892 scope.go:117] "RemoveContainer" containerID="a2823515206b3fbd88435a728042268cd85e4c8af005a094c2894a1d295d0fe6" Jan 22 10:26:07 crc kubenswrapper[4892]: E0122 10:26:07.421546 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:26:12 crc kubenswrapper[4892]: I0122 10:26:12.382625 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_d1581ad3-031b-451b-a8a7-bea327cf4ecd/memcached/0.log" Jan 22 10:26:12 crc kubenswrapper[4892]: I0122 10:26:12.532963 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5b7dcc6b6f-vkw7t_6d80b524-788b-4fdf-b8bf-28ae522512e1/neutron-api/0.log" Jan 22 10:26:12 crc kubenswrapper[4892]: I0122 10:26:12.596248 4892 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5b7dcc6b6f-vkw7t_6d80b524-788b-4fdf-b8bf-28ae522512e1/neutron-httpd/0.log" Jan 22 10:26:12 crc kubenswrapper[4892]: I0122 10:26:12.622823 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-bbfjq_e8f16545-12e1-4084-84f3-a3598a939eaf/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:26:12 crc kubenswrapper[4892]: I0122 10:26:12.892147 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_fb0b7944-3391-4c47-91a6-47c3aa62442a/nova-api-log/0.log" Jan 22 10:26:13 crc kubenswrapper[4892]: I0122 10:26:13.424120 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_fb0b7944-3391-4c47-91a6-47c3aa62442a/nova-api-api/0.log" Jan 22 10:26:13 crc kubenswrapper[4892]: I0122 10:26:13.623944 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_67407100-b6b3-4802-9bb1-337db9cbb3e6/nova-cell0-conductor-conductor/0.log" Jan 22 10:26:13 crc kubenswrapper[4892]: I0122 10:26:13.777592 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_c1860910-1d6f-45fc-b0ce-7aef22083de7/nova-cell1-conductor-conductor/0.log" Jan 22 10:26:13 crc kubenswrapper[4892]: I0122 10:26:13.927663 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_41e0d5e8-eee8-4d06-ae1f-fec66e793078/nova-cell1-novncproxy-novncproxy/0.log" Jan 22 10:26:13 crc kubenswrapper[4892]: I0122 10:26:13.981819 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-vnhgw_12c8d866-32b3-4952-bffa-4993dd9dede1/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:26:14 crc kubenswrapper[4892]: I0122 10:26:14.080975 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_51bca3ef-0b5c-4c51-bf42-95ad11eba3be/nova-metadata-log/0.log" Jan 22 10:26:14 crc kubenswrapper[4892]: I0122 10:26:14.205431 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-szvst_e6301fe9-08d8-4bac-87e9-227fcc218129/controller/0.log" Jan 22 10:26:14 crc kubenswrapper[4892]: I0122 10:26:14.211663 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-szvst_e6301fe9-08d8-4bac-87e9-227fcc218129/kube-rbac-proxy/0.log" Jan 22 10:26:14 crc kubenswrapper[4892]: I0122 10:26:14.230737 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/controller/0.log" Jan 22 10:26:16 crc kubenswrapper[4892]: I0122 10:26:16.231616 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_51bca3ef-0b5c-4c51-bf42-95ad11eba3be/nova-metadata-metadata/0.log" Jan 22 10:26:16 crc kubenswrapper[4892]: I0122 10:26:16.510782 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_27216438-d79d-4606-8ac6-6636fc9b6e06/nova-scheduler-scheduler/0.log" Jan 22 10:26:16 crc kubenswrapper[4892]: I0122 10:26:16.539696 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5dcc844d-f681-4c5c-acb5-0edc57e32a0f/galera/0.log" Jan 22 10:26:16 crc kubenswrapper[4892]: I0122 10:26:16.548640 4892 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/frr/0.log" Jan 22 10:26:16 crc kubenswrapper[4892]: I0122 10:26:16.552746 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5dcc844d-f681-4c5c-acb5-0edc57e32a0f/mysql-bootstrap/0.log" Jan 22 10:26:16 crc kubenswrapper[4892]: I0122 10:26:16.558075 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/reloader/0.log" Jan 22 10:26:16 crc kubenswrapper[4892]: I0122 10:26:16.565627 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/frr-metrics/0.log" Jan 22 10:26:16 crc kubenswrapper[4892]: I0122 10:26:16.577955 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/kube-rbac-proxy/0.log" Jan 22 10:26:16 crc kubenswrapper[4892]: I0122 10:26:16.581955 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_aa34c3fd-3e21-49ac-becd-283928666ff2/galera/0.log" Jan 22 10:26:16 crc kubenswrapper[4892]: I0122 10:26:16.587482 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/kube-rbac-proxy-frr/0.log" Jan 22 10:26:16 crc kubenswrapper[4892]: I0122 10:26:16.593761 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_aa34c3fd-3e21-49ac-becd-283928666ff2/mysql-bootstrap/0.log" Jan 22 10:26:16 crc kubenswrapper[4892]: I0122 10:26:16.597000 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/cp-frr-files/0.log" Jan 22 10:26:16 crc kubenswrapper[4892]: I0122 10:26:16.601239 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_d9a4d1e6-4981-477c-b2cf-8a132de2c1d9/openstackclient/0.log" Jan 22 10:26:16 crc kubenswrapper[4892]: I0122 10:26:16.605662 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/cp-reloader/0.log" Jan 22 10:26:16 crc kubenswrapper[4892]: I0122 10:26:16.615686 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/cp-metrics/0.log" Jan 22 10:26:16 crc kubenswrapper[4892]: I0122 10:26:16.620809 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-f8rll_91fb6665-4bf4-4558-abf7-788627c34a1c/ovn-controller/0.log" Jan 22 10:26:16 crc kubenswrapper[4892]: I0122 10:26:16.635550 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-jh6lw_c16202bf-0e93-4bb8-96fb-cf6537ea21e6/openstack-network-exporter/0.log" Jan 22 10:26:16 crc kubenswrapper[4892]: I0122 10:26:16.635856 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-drtv6_667e6efb-6488-461d-8e5f-380e05c4956e/frr-k8s-webhook-server/0.log" Jan 22 10:26:16 crc kubenswrapper[4892]: I0122 10:26:16.649292 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-snr9q_2d7ca514-734a-4ab1-890f-b04a1549c073/ovsdb-server/0.log" Jan 22 10:26:16 crc kubenswrapper[4892]: I0122 10:26:16.659454 4892 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-snr9q_2d7ca514-734a-4ab1-890f-b04a1549c073/ovs-vswitchd/0.log" Jan 22 10:26:16 crc kubenswrapper[4892]: I0122 10:26:16.666849 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-snr9q_2d7ca514-734a-4ab1-890f-b04a1549c073/ovsdb-server-init/0.log" Jan 22 10:26:16 crc kubenswrapper[4892]: I0122 10:26:16.669745 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6744fff56c-5c2wg_e2e7c48f-6e23-4156-b679-30f2d9735501/manager/0.log" Jan 22 10:26:16 crc kubenswrapper[4892]: I0122 10:26:16.680123 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5bdbd58466-bwr22_b49a3e83-8e00-4934-8968-97d1905959d0/webhook-server/0.log" Jan 22 10:26:16 crc kubenswrapper[4892]: I0122 10:26:16.715427 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-cqxm8_387b75ce-f980-4a8c-a230-15522ca7b923/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:26:16 crc kubenswrapper[4892]: I0122 10:26:16.731900 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_90d16687-ebb2-43f7-bdf4-04334f5895d7/ovn-northd/0.log" Jan 22 10:26:16 crc kubenswrapper[4892]: I0122 10:26:16.745735 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_90d16687-ebb2-43f7-bdf4-04334f5895d7/openstack-network-exporter/0.log" Jan 22 10:26:16 crc kubenswrapper[4892]: I0122 10:26:16.764010 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_74af166d-c2f0-43b1-a516-e1d393e873b4/ovsdbserver-nb/0.log" Jan 22 10:26:16 crc kubenswrapper[4892]: I0122 10:26:16.773524 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_74af166d-c2f0-43b1-a516-e1d393e873b4/openstack-network-exporter/0.log" Jan 22 10:26:16 crc kubenswrapper[4892]: I0122 10:26:16.789069 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ed3e46d9-e9ca-453a-92a3-a07471597296/ovsdbserver-sb/0.log" Jan 22 10:26:16 crc kubenswrapper[4892]: I0122 10:26:16.794627 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ed3e46d9-e9ca-453a-92a3-a07471597296/openstack-network-exporter/0.log" Jan 22 10:26:16 crc kubenswrapper[4892]: I0122 10:26:16.946472 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-8cff6669d-x8cnv_fa434e36-332b-401e-99b3-2dcb7d75da94/placement-log/0.log" Jan 22 10:26:17 crc kubenswrapper[4892]: I0122 10:26:17.110221 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-8cff6669d-x8cnv_fa434e36-332b-401e-99b3-2dcb7d75da94/placement-api/0.log" Jan 22 10:26:17 crc kubenswrapper[4892]: I0122 10:26:17.138712 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_57552917-a09b-4f52-96b5-c7749b9af779/rabbitmq/0.log" Jan 22 10:26:17 crc kubenswrapper[4892]: I0122 10:26:17.145917 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_57552917-a09b-4f52-96b5-c7749b9af779/setup-container/0.log" Jan 22 10:26:17 crc kubenswrapper[4892]: I0122 10:26:17.179025 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_30fa58bc-46e3-40c4-ad73-3f2e1f8341dd/rabbitmq/0.log" Jan 22 10:26:17 crc kubenswrapper[4892]: I0122 10:26:17.186646 4892 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_30fa58bc-46e3-40c4-ad73-3f2e1f8341dd/setup-container/0.log" Jan 22 10:26:17 crc kubenswrapper[4892]: I0122 10:26:17.204679 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2bjk4_caa80b1d-b3d2-47d6-99e6-73420bc5f61d/speaker/0.log" Jan 22 10:26:17 crc kubenswrapper[4892]: I0122 10:26:17.205417 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-nwf54_cc609164-f9fe-4caf-ae10-ed043d1091fe/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:26:17 crc kubenswrapper[4892]: I0122 10:26:17.210236 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2bjk4_caa80b1d-b3d2-47d6-99e6-73420bc5f61d/kube-rbac-proxy/0.log" Jan 22 10:26:17 crc kubenswrapper[4892]: I0122 10:26:17.215390 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-48xvm_4316ad67-9810-4253-bcfb-faa1b9936429/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:26:17 crc kubenswrapper[4892]: I0122 10:26:17.224321 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-tvjg5_8bb695bf-11e7-478a-a348-2a06ef0bcdaf/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:26:17 crc kubenswrapper[4892]: I0122 10:26:17.243809 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-8d76w_fac9a973-588e-43e5-b6d1-530127ccccad/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:26:17 crc kubenswrapper[4892]: I0122 10:26:17.256681 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-l6df8_9486c77a-626c-488a-a958-d717027e31db/ssh-known-hosts-edpm-deployment/0.log" Jan 22 10:26:17 crc kubenswrapper[4892]: I0122 10:26:17.390879 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-674547b56f-gvjxm_accdf866-14d0-4308-a8d7-c598fde46122/proxy-httpd/0.log" Jan 22 10:26:17 crc kubenswrapper[4892]: I0122 10:26:17.410099 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-674547b56f-gvjxm_accdf866-14d0-4308-a8d7-c598fde46122/proxy-server/0.log" Jan 22 10:26:17 crc kubenswrapper[4892]: I0122 10:26:17.421866 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-x9lwk_b23a9c04-b07c-4dd1-a475-7b1d70b9bddc/swift-ring-rebalance/0.log" Jan 22 10:26:17 crc kubenswrapper[4892]: I0122 10:26:17.447488 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0c2888e-984a-482d-b7a3-5de66720aaf8/account-server/0.log" Jan 22 10:26:17 crc kubenswrapper[4892]: I0122 10:26:17.486702 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0c2888e-984a-482d-b7a3-5de66720aaf8/account-replicator/0.log" Jan 22 10:26:17 crc kubenswrapper[4892]: I0122 10:26:17.493157 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0c2888e-984a-482d-b7a3-5de66720aaf8/account-auditor/0.log" Jan 22 10:26:17 crc kubenswrapper[4892]: I0122 10:26:17.503035 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0c2888e-984a-482d-b7a3-5de66720aaf8/account-reaper/0.log" Jan 22 10:26:17 crc kubenswrapper[4892]: I0122 10:26:17.526523 4892 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_swift-storage-0_d0c2888e-984a-482d-b7a3-5de66720aaf8/container-server/0.log" Jan 22 10:26:17 crc kubenswrapper[4892]: I0122 10:26:17.565878 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0c2888e-984a-482d-b7a3-5de66720aaf8/container-replicator/0.log" Jan 22 10:26:17 crc kubenswrapper[4892]: I0122 10:26:17.578029 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0c2888e-984a-482d-b7a3-5de66720aaf8/container-auditor/0.log" Jan 22 10:26:17 crc kubenswrapper[4892]: I0122 10:26:17.595069 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0c2888e-984a-482d-b7a3-5de66720aaf8/container-updater/0.log" Jan 22 10:26:17 crc kubenswrapper[4892]: I0122 10:26:17.604935 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0c2888e-984a-482d-b7a3-5de66720aaf8/object-server/0.log" Jan 22 10:26:17 crc kubenswrapper[4892]: I0122 10:26:17.639488 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0c2888e-984a-482d-b7a3-5de66720aaf8/object-replicator/0.log" Jan 22 10:26:17 crc kubenswrapper[4892]: I0122 10:26:17.668908 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0c2888e-984a-482d-b7a3-5de66720aaf8/object-auditor/0.log" Jan 22 10:26:17 crc kubenswrapper[4892]: I0122 10:26:17.680008 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0c2888e-984a-482d-b7a3-5de66720aaf8/object-updater/0.log" Jan 22 10:26:17 crc kubenswrapper[4892]: I0122 10:26:17.692010 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0c2888e-984a-482d-b7a3-5de66720aaf8/object-expirer/0.log" Jan 22 10:26:17 crc kubenswrapper[4892]: I0122 10:26:17.702826 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0c2888e-984a-482d-b7a3-5de66720aaf8/rsync/0.log" Jan 22 10:26:17 crc kubenswrapper[4892]: I0122 10:26:17.712427 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0c2888e-984a-482d-b7a3-5de66720aaf8/swift-recon-cron/0.log" Jan 22 10:26:17 crc kubenswrapper[4892]: I0122 10:26:17.784384 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-dhd4w_cdb08ec6-d82f-4ea7-b6af-170f51b46949/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:26:17 crc kubenswrapper[4892]: I0122 10:26:17.804685 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_13171535-bfb7-4114-884d-b9b031615de3/tempest-tests-tempest-tests-runner/0.log" Jan 22 10:26:17 crc kubenswrapper[4892]: I0122 10:26:17.813061 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_6efa520d-5ef8-49b9-b90f-197efdf100ed/test-operator-logs-container/0.log" Jan 22 10:26:17 crc kubenswrapper[4892]: I0122 10:26:17.837630 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-6lb4t_49d64b56-37f0-45f2-8aec-a3dfbf171f09/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:26:19 crc kubenswrapper[4892]: I0122 10:26:19.659078 4892 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59dd8b7cbf-mcfls_f7ec268a-c82e-455e-b4b9-d0f96998c015/manager/0.log" Jan 22 10:26:19 crc kubenswrapper[4892]: I0122 10:26:19.716523 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-fnrjr_815dba39-30ed-4471-bf04-ecc573373016/manager/0.log" Jan 22 10:26:19 crc kubenswrapper[4892]: I0122 10:26:19.735881 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-sx9p8_c020c33f-f12c-47ce-9639-c0069dff8bc4/manager/0.log" Jan 22 10:26:19 crc kubenswrapper[4892]: I0122 10:26:19.764568 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd_97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8/extract/0.log" Jan 22 10:26:19 crc kubenswrapper[4892]: I0122 10:26:19.792006 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd_97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8/util/0.log" Jan 22 10:26:19 crc kubenswrapper[4892]: I0122 10:26:19.806572 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd_97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8/pull/0.log" Jan 22 10:26:19 crc kubenswrapper[4892]: I0122 10:26:19.925261 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-9lqvx_2047bcfa-42e4-4e81-b2c9-47f4a876ea84/manager/0.log" Jan 22 10:26:19 crc kubenswrapper[4892]: I0122 10:26:19.938837 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-b9v4x_fcd15b84-585b-4984-9c1f-26a6c585ada4/manager/0.log" Jan 22 10:26:19 crc kubenswrapper[4892]: I0122 10:26:19.979309 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-wkmzq_c9a77485-9340-433e-8bf6-cd47551438a9/manager/0.log" Jan 22 10:26:20 crc kubenswrapper[4892]: I0122 10:26:20.275900 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-25z65_4f507c71-c9ab-4398-b25a-b6070d41f2b7/manager/0.log" Jan 22 10:26:20 crc kubenswrapper[4892]: I0122 10:26:20.287557 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69d6c9f5b8-dcjs4_186e1123-d674-468b-91c1-92eb6bca4a30/manager/0.log" Jan 22 10:26:20 crc kubenswrapper[4892]: I0122 10:26:20.348259 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-vm28p_361a2cfd-62a4-40cc-b85c-7e81e6adb91d/manager/0.log" Jan 22 10:26:20 crc kubenswrapper[4892]: I0122 10:26:20.360444 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-67mcr_f942aff3-65c5-4507-af71-0e4596abc4cf/manager/0.log" Jan 22 10:26:20 crc kubenswrapper[4892]: I0122 10:26:20.390940 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-4ldkj_fd035f9e-2587-4286-85d9-db7c209970de/manager/0.log" Jan 22 10:26:20 crc kubenswrapper[4892]: I0122 10:26:20.420618 4892 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5d8f59fb49-dvlzw_928d4875-5da0-47ce-a68d-99fed2b7edce/manager/0.log" Jan 22 10:26:20 crc kubenswrapper[4892]: I0122 10:26:20.644792 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-pkbln_8a19ffda-db08-44ec-bc17-d70c74f9552e/manager/0.log" Jan 22 10:26:20 crc kubenswrapper[4892]: I0122 10:26:20.656926 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-sjml2_43ab3264-2c0d-44a8-ab85-66efc360bf67/manager/0.log" Jan 22 10:26:20 crc kubenswrapper[4892]: I0122 10:26:20.681656 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr_c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea/manager/0.log" Jan 22 10:26:20 crc kubenswrapper[4892]: I0122 10:26:20.827274 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-698d6bb84b-sckbn_bf11bbca-62bd-4421-b0be-a62f87a6d600/operator/0.log" Jan 22 10:26:21 crc kubenswrapper[4892]: I0122 10:26:21.426092 4892 scope.go:117] "RemoveContainer" containerID="a2823515206b3fbd88435a728042268cd85e4c8af005a094c2894a1d295d0fe6" Jan 22 10:26:21 crc kubenswrapper[4892]: E0122 10:26:21.426719 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:26:22 crc kubenswrapper[4892]: I0122 10:26:22.159007 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-788c8b99b5-cws6m_7b2bb8eb-1122-4141-a4ed-c3d316c8b821/manager/0.log" Jan 22 10:26:22 crc kubenswrapper[4892]: I0122 10:26:22.231924 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-hq2gz_016ec7ec-1244-47ab-81ba-957ed4b83b4f/registry-server/0.log" Jan 22 10:26:22 crc kubenswrapper[4892]: I0122 10:26:22.291652 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-9htzp_4ce3456e-dba6-498d-bf5a-aef2832489fe/manager/0.log" Jan 22 10:26:22 crc kubenswrapper[4892]: I0122 10:26:22.313229 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-hf9ft_e23d3dd6-bce9-496f-840b-0bbd3017826f/manager/0.log" Jan 22 10:26:22 crc kubenswrapper[4892]: I0122 10:26:22.335272 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-hkmzg_7be69e64-d272-47f2-933a-4925c0aad02c/operator/0.log" Jan 22 10:26:22 crc kubenswrapper[4892]: I0122 10:26:22.363899 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-gfcjl_f7dcb7b0-0580-4aff-8770-377761a44f88/manager/0.log" Jan 22 10:26:22 crc kubenswrapper[4892]: I0122 10:26:22.426952 4892 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-2n9gl_062ff35c-ceb7-44b0-a2ef-1d79a14a444c/manager/0.log" Jan 22 10:26:22 crc kubenswrapper[4892]: I0122 10:26:22.437109 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-hj2tb_be68c0da-a0d9-463c-be32-6191b85ae620/manager/0.log" Jan 22 10:26:22 crc kubenswrapper[4892]: I0122 10:26:22.449892 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5ffb9c6597-xq8jw_b6638ff5-13e6-44b1-8711-0c775882282f/manager/0.log" Jan 22 10:26:27 crc kubenswrapper[4892]: I0122 10:26:27.273079 4892 generic.go:334] "Generic (PLEG): container finished" podID="4cfbac31-0f60-48cc-91e8-7354a7166278" containerID="8cf7b1d8d282345d9ebdccddb5bec3eab402fe184108f3cd61e4deab96029ff9" exitCode=0 Jan 22 10:26:27 crc kubenswrapper[4892]: I0122 10:26:27.273245 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cnzvv/crc-debug-g4qxb" event={"ID":"4cfbac31-0f60-48cc-91e8-7354a7166278","Type":"ContainerDied","Data":"8cf7b1d8d282345d9ebdccddb5bec3eab402fe184108f3cd61e4deab96029ff9"} Jan 22 10:26:27 crc kubenswrapper[4892]: I0122 10:26:27.386940 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-wf6dw_3a01b910-5841-4f20-b270-c7040213ac8d/control-plane-machine-set-operator/0.log" Jan 22 10:26:27 crc kubenswrapper[4892]: I0122 10:26:27.403819 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mhgmk_09f94488-4261-4a70-ab65-e85c42ba3313/kube-rbac-proxy/0.log" Jan 22 10:26:27 crc kubenswrapper[4892]: I0122 10:26:27.416076 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mhgmk_09f94488-4261-4a70-ab65-e85c42ba3313/machine-api-operator/0.log" Jan 22 10:26:28 crc kubenswrapper[4892]: I0122 10:26:28.395395 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cnzvv/crc-debug-g4qxb" Jan 22 10:26:28 crc kubenswrapper[4892]: I0122 10:26:28.428873 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cnzvv/crc-debug-g4qxb"] Jan 22 10:26:28 crc kubenswrapper[4892]: I0122 10:26:28.438248 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cnzvv/crc-debug-g4qxb"] Jan 22 10:26:28 crc kubenswrapper[4892]: I0122 10:26:28.588998 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4cfbac31-0f60-48cc-91e8-7354a7166278-host\") pod \"4cfbac31-0f60-48cc-91e8-7354a7166278\" (UID: \"4cfbac31-0f60-48cc-91e8-7354a7166278\") " Jan 22 10:26:28 crc kubenswrapper[4892]: I0122 10:26:28.589174 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cfbac31-0f60-48cc-91e8-7354a7166278-host" (OuterVolumeSpecName: "host") pod "4cfbac31-0f60-48cc-91e8-7354a7166278" (UID: "4cfbac31-0f60-48cc-91e8-7354a7166278"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:26:28 crc kubenswrapper[4892]: I0122 10:26:28.589491 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p22gj\" (UniqueName: \"kubernetes.io/projected/4cfbac31-0f60-48cc-91e8-7354a7166278-kube-api-access-p22gj\") pod \"4cfbac31-0f60-48cc-91e8-7354a7166278\" (UID: \"4cfbac31-0f60-48cc-91e8-7354a7166278\") " Jan 22 10:26:28 crc kubenswrapper[4892]: I0122 10:26:28.589854 4892 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4cfbac31-0f60-48cc-91e8-7354a7166278-host\") on node \"crc\" DevicePath \"\"" Jan 22 10:26:28 crc kubenswrapper[4892]: I0122 10:26:28.595041 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cfbac31-0f60-48cc-91e8-7354a7166278-kube-api-access-p22gj" (OuterVolumeSpecName: "kube-api-access-p22gj") pod "4cfbac31-0f60-48cc-91e8-7354a7166278" (UID: "4cfbac31-0f60-48cc-91e8-7354a7166278"). InnerVolumeSpecName "kube-api-access-p22gj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:26:28 crc kubenswrapper[4892]: I0122 10:26:28.691715 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p22gj\" (UniqueName: \"kubernetes.io/projected/4cfbac31-0f60-48cc-91e8-7354a7166278-kube-api-access-p22gj\") on node \"crc\" DevicePath \"\"" Jan 22 10:26:29 crc kubenswrapper[4892]: I0122 10:26:29.292541 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd4f6be665c7ca728db772673200dac020671515a97396a7be8a981e12ed419e" Jan 22 10:26:29 crc kubenswrapper[4892]: I0122 10:26:29.292620 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cnzvv/crc-debug-g4qxb" Jan 22 10:26:29 crc kubenswrapper[4892]: I0122 10:26:29.429169 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cfbac31-0f60-48cc-91e8-7354a7166278" path="/var/lib/kubelet/pods/4cfbac31-0f60-48cc-91e8-7354a7166278/volumes" Jan 22 10:26:29 crc kubenswrapper[4892]: I0122 10:26:29.647233 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cnzvv/crc-debug-bjs8c"] Jan 22 10:26:29 crc kubenswrapper[4892]: E0122 10:26:29.648069 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cfbac31-0f60-48cc-91e8-7354a7166278" containerName="container-00" Jan 22 10:26:29 crc kubenswrapper[4892]: I0122 10:26:29.648096 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cfbac31-0f60-48cc-91e8-7354a7166278" containerName="container-00" Jan 22 10:26:29 crc kubenswrapper[4892]: I0122 10:26:29.648348 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cfbac31-0f60-48cc-91e8-7354a7166278" containerName="container-00" Jan 22 10:26:29 crc kubenswrapper[4892]: I0122 10:26:29.648956 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cnzvv/crc-debug-bjs8c" Jan 22 10:26:29 crc kubenswrapper[4892]: I0122 10:26:29.711052 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhcjd\" (UniqueName: \"kubernetes.io/projected/3253a69d-1fc8-4085-b9f5-b71f1e8fba2f-kube-api-access-rhcjd\") pod \"crc-debug-bjs8c\" (UID: \"3253a69d-1fc8-4085-b9f5-b71f1e8fba2f\") " pod="openshift-must-gather-cnzvv/crc-debug-bjs8c" Jan 22 10:26:29 crc kubenswrapper[4892]: I0122 10:26:29.711139 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3253a69d-1fc8-4085-b9f5-b71f1e8fba2f-host\") pod \"crc-debug-bjs8c\" (UID: \"3253a69d-1fc8-4085-b9f5-b71f1e8fba2f\") " pod="openshift-must-gather-cnzvv/crc-debug-bjs8c" Jan 22 10:26:29 crc kubenswrapper[4892]: I0122 10:26:29.812631 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3253a69d-1fc8-4085-b9f5-b71f1e8fba2f-host\") pod \"crc-debug-bjs8c\" (UID: \"3253a69d-1fc8-4085-b9f5-b71f1e8fba2f\") " pod="openshift-must-gather-cnzvv/crc-debug-bjs8c" Jan 22 10:26:29 crc kubenswrapper[4892]: I0122 10:26:29.812782 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhcjd\" (UniqueName: \"kubernetes.io/projected/3253a69d-1fc8-4085-b9f5-b71f1e8fba2f-kube-api-access-rhcjd\") pod \"crc-debug-bjs8c\" (UID: \"3253a69d-1fc8-4085-b9f5-b71f1e8fba2f\") " pod="openshift-must-gather-cnzvv/crc-debug-bjs8c" Jan 22 10:26:29 crc kubenswrapper[4892]: I0122 10:26:29.812777 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3253a69d-1fc8-4085-b9f5-b71f1e8fba2f-host\") pod \"crc-debug-bjs8c\" (UID: \"3253a69d-1fc8-4085-b9f5-b71f1e8fba2f\") " pod="openshift-must-gather-cnzvv/crc-debug-bjs8c" Jan 22 10:26:29 crc kubenswrapper[4892]: I0122 10:26:29.833760 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhcjd\" (UniqueName: \"kubernetes.io/projected/3253a69d-1fc8-4085-b9f5-b71f1e8fba2f-kube-api-access-rhcjd\") pod \"crc-debug-bjs8c\" (UID: \"3253a69d-1fc8-4085-b9f5-b71f1e8fba2f\") " pod="openshift-must-gather-cnzvv/crc-debug-bjs8c" Jan 22 10:26:29 crc kubenswrapper[4892]: I0122 10:26:29.969492 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cnzvv/crc-debug-bjs8c" Jan 22 10:26:30 crc kubenswrapper[4892]: I0122 10:26:30.301777 4892 generic.go:334] "Generic (PLEG): container finished" podID="3253a69d-1fc8-4085-b9f5-b71f1e8fba2f" containerID="110c1b70fe8560c7dc8d2171735cbdd8672cf2d44a06cb2474ac2785be8235cc" exitCode=0 Jan 22 10:26:30 crc kubenswrapper[4892]: I0122 10:26:30.301891 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cnzvv/crc-debug-bjs8c" event={"ID":"3253a69d-1fc8-4085-b9f5-b71f1e8fba2f","Type":"ContainerDied","Data":"110c1b70fe8560c7dc8d2171735cbdd8672cf2d44a06cb2474ac2785be8235cc"} Jan 22 10:26:30 crc kubenswrapper[4892]: I0122 10:26:30.302137 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cnzvv/crc-debug-bjs8c" event={"ID":"3253a69d-1fc8-4085-b9f5-b71f1e8fba2f","Type":"ContainerStarted","Data":"c569c5a8b7ae1dceebacadf61b39020946a16294f2552535f11dd012a8da2cc7"} Jan 22 10:26:30 crc kubenswrapper[4892]: I0122 10:26:30.761000 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cnzvv/crc-debug-bjs8c"] Jan 22 10:26:30 crc kubenswrapper[4892]: I0122 10:26:30.769798 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cnzvv/crc-debug-bjs8c"] Jan 22 10:26:31 crc kubenswrapper[4892]: I0122 10:26:31.439595 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cnzvv/crc-debug-bjs8c" Jan 22 10:26:31 crc kubenswrapper[4892]: I0122 10:26:31.641393 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhcjd\" (UniqueName: \"kubernetes.io/projected/3253a69d-1fc8-4085-b9f5-b71f1e8fba2f-kube-api-access-rhcjd\") pod \"3253a69d-1fc8-4085-b9f5-b71f1e8fba2f\" (UID: \"3253a69d-1fc8-4085-b9f5-b71f1e8fba2f\") " Jan 22 10:26:31 crc kubenswrapper[4892]: I0122 10:26:31.641769 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3253a69d-1fc8-4085-b9f5-b71f1e8fba2f-host\") pod \"3253a69d-1fc8-4085-b9f5-b71f1e8fba2f\" (UID: \"3253a69d-1fc8-4085-b9f5-b71f1e8fba2f\") " Jan 22 10:26:31 crc kubenswrapper[4892]: I0122 10:26:31.641942 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3253a69d-1fc8-4085-b9f5-b71f1e8fba2f-host" (OuterVolumeSpecName: "host") pod "3253a69d-1fc8-4085-b9f5-b71f1e8fba2f" (UID: "3253a69d-1fc8-4085-b9f5-b71f1e8fba2f"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:26:31 crc kubenswrapper[4892]: I0122 10:26:31.642312 4892 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3253a69d-1fc8-4085-b9f5-b71f1e8fba2f-host\") on node \"crc\" DevicePath \"\"" Jan 22 10:26:31 crc kubenswrapper[4892]: I0122 10:26:31.660040 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3253a69d-1fc8-4085-b9f5-b71f1e8fba2f-kube-api-access-rhcjd" (OuterVolumeSpecName: "kube-api-access-rhcjd") pod "3253a69d-1fc8-4085-b9f5-b71f1e8fba2f" (UID: "3253a69d-1fc8-4085-b9f5-b71f1e8fba2f"). InnerVolumeSpecName "kube-api-access-rhcjd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:26:31 crc kubenswrapper[4892]: I0122 10:26:31.745149 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhcjd\" (UniqueName: \"kubernetes.io/projected/3253a69d-1fc8-4085-b9f5-b71f1e8fba2f-kube-api-access-rhcjd\") on node \"crc\" DevicePath \"\"" Jan 22 10:26:31 crc kubenswrapper[4892]: I0122 10:26:31.924615 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cnzvv/crc-debug-zpdz5"] Jan 22 10:26:31 crc kubenswrapper[4892]: E0122 10:26:31.925033 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3253a69d-1fc8-4085-b9f5-b71f1e8fba2f" containerName="container-00" Jan 22 10:26:31 crc kubenswrapper[4892]: I0122 10:26:31.925049 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="3253a69d-1fc8-4085-b9f5-b71f1e8fba2f" containerName="container-00" Jan 22 10:26:31 crc kubenswrapper[4892]: I0122 10:26:31.925274 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="3253a69d-1fc8-4085-b9f5-b71f1e8fba2f" containerName="container-00" Jan 22 10:26:31 crc kubenswrapper[4892]: I0122 10:26:31.925839 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cnzvv/crc-debug-zpdz5" Jan 22 10:26:31 crc kubenswrapper[4892]: I0122 10:26:31.949478 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/60224567-88b8-4711-aaaf-6c3d9a73dc6c-host\") pod \"crc-debug-zpdz5\" (UID: \"60224567-88b8-4711-aaaf-6c3d9a73dc6c\") " pod="openshift-must-gather-cnzvv/crc-debug-zpdz5" Jan 22 10:26:31 crc kubenswrapper[4892]: I0122 10:26:31.949677 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfll5\" (UniqueName: \"kubernetes.io/projected/60224567-88b8-4711-aaaf-6c3d9a73dc6c-kube-api-access-dfll5\") pod \"crc-debug-zpdz5\" (UID: \"60224567-88b8-4711-aaaf-6c3d9a73dc6c\") " pod="openshift-must-gather-cnzvv/crc-debug-zpdz5" Jan 22 10:26:32 crc kubenswrapper[4892]: I0122 10:26:32.051045 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/60224567-88b8-4711-aaaf-6c3d9a73dc6c-host\") pod \"crc-debug-zpdz5\" (UID: \"60224567-88b8-4711-aaaf-6c3d9a73dc6c\") " pod="openshift-must-gather-cnzvv/crc-debug-zpdz5" Jan 22 10:26:32 crc kubenswrapper[4892]: I0122 10:26:32.051182 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfll5\" (UniqueName: \"kubernetes.io/projected/60224567-88b8-4711-aaaf-6c3d9a73dc6c-kube-api-access-dfll5\") pod \"crc-debug-zpdz5\" (UID: \"60224567-88b8-4711-aaaf-6c3d9a73dc6c\") " pod="openshift-must-gather-cnzvv/crc-debug-zpdz5" Jan 22 10:26:32 crc kubenswrapper[4892]: I0122 10:26:32.051185 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/60224567-88b8-4711-aaaf-6c3d9a73dc6c-host\") pod \"crc-debug-zpdz5\" (UID: \"60224567-88b8-4711-aaaf-6c3d9a73dc6c\") " pod="openshift-must-gather-cnzvv/crc-debug-zpdz5" Jan 22 10:26:32 crc kubenswrapper[4892]: I0122 10:26:32.070705 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfll5\" (UniqueName: \"kubernetes.io/projected/60224567-88b8-4711-aaaf-6c3d9a73dc6c-kube-api-access-dfll5\") pod \"crc-debug-zpdz5\" (UID: \"60224567-88b8-4711-aaaf-6c3d9a73dc6c\") " 
pod="openshift-must-gather-cnzvv/crc-debug-zpdz5" Jan 22 10:26:32 crc kubenswrapper[4892]: I0122 10:26:32.243435 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cnzvv/crc-debug-zpdz5" Jan 22 10:26:32 crc kubenswrapper[4892]: I0122 10:26:32.321886 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cnzvv/crc-debug-zpdz5" event={"ID":"60224567-88b8-4711-aaaf-6c3d9a73dc6c","Type":"ContainerStarted","Data":"c42f20859062df3064fbef2570d831594fd628a5c34cffda405176128c0a8fa9"} Jan 22 10:26:32 crc kubenswrapper[4892]: I0122 10:26:32.324213 4892 scope.go:117] "RemoveContainer" containerID="110c1b70fe8560c7dc8d2171735cbdd8672cf2d44a06cb2474ac2785be8235cc" Jan 22 10:26:32 crc kubenswrapper[4892]: I0122 10:26:32.324273 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cnzvv/crc-debug-bjs8c" Jan 22 10:26:33 crc kubenswrapper[4892]: I0122 10:26:33.344649 4892 generic.go:334] "Generic (PLEG): container finished" podID="60224567-88b8-4711-aaaf-6c3d9a73dc6c" containerID="ff3a765786ca765598251f18ea40690525b7d65e718e7f7ed663c7af51765692" exitCode=0 Jan 22 10:26:33 crc kubenswrapper[4892]: I0122 10:26:33.345748 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cnzvv/crc-debug-zpdz5" event={"ID":"60224567-88b8-4711-aaaf-6c3d9a73dc6c","Type":"ContainerDied","Data":"ff3a765786ca765598251f18ea40690525b7d65e718e7f7ed663c7af51765692"} Jan 22 10:26:33 crc kubenswrapper[4892]: I0122 10:26:33.383648 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cnzvv/crc-debug-zpdz5"] Jan 22 10:26:33 crc kubenswrapper[4892]: I0122 10:26:33.391062 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cnzvv/crc-debug-zpdz5"] Jan 22 10:26:33 crc kubenswrapper[4892]: I0122 10:26:33.429449 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3253a69d-1fc8-4085-b9f5-b71f1e8fba2f" path="/var/lib/kubelet/pods/3253a69d-1fc8-4085-b9f5-b71f1e8fba2f/volumes" Jan 22 10:26:34 crc kubenswrapper[4892]: I0122 10:26:34.460870 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cnzvv/crc-debug-zpdz5" Jan 22 10:26:34 crc kubenswrapper[4892]: I0122 10:26:34.493871 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfll5\" (UniqueName: \"kubernetes.io/projected/60224567-88b8-4711-aaaf-6c3d9a73dc6c-kube-api-access-dfll5\") pod \"60224567-88b8-4711-aaaf-6c3d9a73dc6c\" (UID: \"60224567-88b8-4711-aaaf-6c3d9a73dc6c\") " Jan 22 10:26:34 crc kubenswrapper[4892]: I0122 10:26:34.493953 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/60224567-88b8-4711-aaaf-6c3d9a73dc6c-host\") pod \"60224567-88b8-4711-aaaf-6c3d9a73dc6c\" (UID: \"60224567-88b8-4711-aaaf-6c3d9a73dc6c\") " Jan 22 10:26:34 crc kubenswrapper[4892]: I0122 10:26:34.494388 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60224567-88b8-4711-aaaf-6c3d9a73dc6c-host" (OuterVolumeSpecName: "host") pod "60224567-88b8-4711-aaaf-6c3d9a73dc6c" (UID: "60224567-88b8-4711-aaaf-6c3d9a73dc6c"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:26:34 crc kubenswrapper[4892]: I0122 10:26:34.499541 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60224567-88b8-4711-aaaf-6c3d9a73dc6c-kube-api-access-dfll5" (OuterVolumeSpecName: "kube-api-access-dfll5") pod "60224567-88b8-4711-aaaf-6c3d9a73dc6c" (UID: "60224567-88b8-4711-aaaf-6c3d9a73dc6c"). InnerVolumeSpecName "kube-api-access-dfll5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:26:34 crc kubenswrapper[4892]: I0122 10:26:34.596228 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfll5\" (UniqueName: \"kubernetes.io/projected/60224567-88b8-4711-aaaf-6c3d9a73dc6c-kube-api-access-dfll5\") on node \"crc\" DevicePath \"\"" Jan 22 10:26:34 crc kubenswrapper[4892]: I0122 10:26:34.596277 4892 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/60224567-88b8-4711-aaaf-6c3d9a73dc6c-host\") on node \"crc\" DevicePath \"\"" Jan 22 10:26:35 crc kubenswrapper[4892]: I0122 10:26:35.361473 4892 scope.go:117] "RemoveContainer" containerID="ff3a765786ca765598251f18ea40690525b7d65e718e7f7ed663c7af51765692" Jan 22 10:26:35 crc kubenswrapper[4892]: I0122 10:26:35.361519 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cnzvv/crc-debug-zpdz5" Jan 22 10:26:35 crc kubenswrapper[4892]: I0122 10:26:35.418778 4892 scope.go:117] "RemoveContainer" containerID="a2823515206b3fbd88435a728042268cd85e4c8af005a094c2894a1d295d0fe6" Jan 22 10:26:35 crc kubenswrapper[4892]: E0122 10:26:35.419235 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:26:35 crc kubenswrapper[4892]: I0122 10:26:35.429824 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60224567-88b8-4711-aaaf-6c3d9a73dc6c" path="/var/lib/kubelet/pods/60224567-88b8-4711-aaaf-6c3d9a73dc6c/volumes" Jan 22 10:26:38 crc kubenswrapper[4892]: I0122 10:26:38.572653 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-t8w6r_5c3dbc91-88ca-44dc-a4fd-fb147d8df3e0/cert-manager-controller/0.log" Jan 22 10:26:38 crc kubenswrapper[4892]: I0122 10:26:38.587045 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-rzj7t_6035615e-d06d-45df-b927-9233155546ce/cert-manager-cainjector/0.log" Jan 22 10:26:38 crc kubenswrapper[4892]: I0122 10:26:38.594698 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-rv9dl_fc56bdec-62b2-486e-84c5-363cc15c5cec/cert-manager-webhook/0.log" Jan 22 10:26:43 crc kubenswrapper[4892]: I0122 10:26:43.651445 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-5g6g2_d0e47195-84b2-4249-8f2e-833525b47d1c/nmstate-console-plugin/0.log" Jan 22 10:26:43 crc kubenswrapper[4892]: I0122 10:26:43.669514 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-c4k72_76d51902-a31d-4cfd-aa0a-de6c055c79fd/nmstate-handler/0.log" 
Jan 22 10:26:43 crc kubenswrapper[4892]: I0122 10:26:43.683647 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-wcg8m_9911d829-131f-4c59-9268-c0165a5f1126/nmstate-metrics/0.log" Jan 22 10:26:43 crc kubenswrapper[4892]: I0122 10:26:43.694744 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-wcg8m_9911d829-131f-4c59-9268-c0165a5f1126/kube-rbac-proxy/0.log" Jan 22 10:26:43 crc kubenswrapper[4892]: I0122 10:26:43.708436 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-blljf_ebf0d927-7aa3-4f75-b5be-7037df253175/nmstate-operator/0.log" Jan 22 10:26:43 crc kubenswrapper[4892]: I0122 10:26:43.718260 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-t6lmb_21249177-5044-4f9b-a0dc-dcad499ec3ad/nmstate-webhook/0.log" Jan 22 10:26:48 crc kubenswrapper[4892]: I0122 10:26:48.419022 4892 scope.go:117] "RemoveContainer" containerID="a2823515206b3fbd88435a728042268cd85e4c8af005a094c2894a1d295d0fe6" Jan 22 10:26:48 crc kubenswrapper[4892]: E0122 10:26:48.419799 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:26:54 crc kubenswrapper[4892]: I0122 10:26:54.193446 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-szvst_e6301fe9-08d8-4bac-87e9-227fcc218129/controller/0.log" Jan 22 10:26:54 crc kubenswrapper[4892]: I0122 10:26:54.199754 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-szvst_e6301fe9-08d8-4bac-87e9-227fcc218129/kube-rbac-proxy/0.log" Jan 22 10:26:54 crc kubenswrapper[4892]: I0122 10:26:54.221623 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/controller/0.log" Jan 22 10:26:55 crc kubenswrapper[4892]: I0122 10:26:55.723639 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/frr/0.log" Jan 22 10:26:55 crc kubenswrapper[4892]: I0122 10:26:55.739807 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/reloader/0.log" Jan 22 10:26:55 crc kubenswrapper[4892]: I0122 10:26:55.749923 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/frr-metrics/0.log" Jan 22 10:26:55 crc kubenswrapper[4892]: I0122 10:26:55.760565 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/kube-rbac-proxy/0.log" Jan 22 10:26:55 crc kubenswrapper[4892]: I0122 10:26:55.767330 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/kube-rbac-proxy-frr/0.log" Jan 22 10:26:55 crc kubenswrapper[4892]: I0122 10:26:55.775171 4892 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/cp-frr-files/0.log" Jan 22 10:26:55 crc kubenswrapper[4892]: I0122 10:26:55.786274 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/cp-reloader/0.log" Jan 22 10:26:55 crc kubenswrapper[4892]: I0122 10:26:55.793441 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/cp-metrics/0.log" Jan 22 10:26:55 crc kubenswrapper[4892]: I0122 10:26:55.804573 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-drtv6_667e6efb-6488-461d-8e5f-380e05c4956e/frr-k8s-webhook-server/0.log" Jan 22 10:26:55 crc kubenswrapper[4892]: I0122 10:26:55.844600 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6744fff56c-5c2wg_e2e7c48f-6e23-4156-b679-30f2d9735501/manager/0.log" Jan 22 10:26:55 crc kubenswrapper[4892]: I0122 10:26:55.853422 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5bdbd58466-bwr22_b49a3e83-8e00-4934-8968-97d1905959d0/webhook-server/0.log" Jan 22 10:26:56 crc kubenswrapper[4892]: I0122 10:26:56.211364 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2bjk4_caa80b1d-b3d2-47d6-99e6-73420bc5f61d/speaker/0.log" Jan 22 10:26:56 crc kubenswrapper[4892]: I0122 10:26:56.219978 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2bjk4_caa80b1d-b3d2-47d6-99e6-73420bc5f61d/kube-rbac-proxy/0.log" Jan 22 10:26:59 crc kubenswrapper[4892]: I0122 10:26:59.816326 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8_fe89e20a-62fc-4d26-ae68-73810243a106/extract/0.log" Jan 22 10:26:59 crc kubenswrapper[4892]: I0122 10:26:59.827272 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8_fe89e20a-62fc-4d26-ae68-73810243a106/util/0.log" Jan 22 10:26:59 crc kubenswrapper[4892]: I0122 10:26:59.835226 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcd79m8_fe89e20a-62fc-4d26-ae68-73810243a106/pull/0.log" Jan 22 10:26:59 crc kubenswrapper[4892]: I0122 10:26:59.846650 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn_8c2da807-7b14-4384-bf1a-dcfad84a6a14/extract/0.log" Jan 22 10:26:59 crc kubenswrapper[4892]: I0122 10:26:59.854203 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn_8c2da807-7b14-4384-bf1a-dcfad84a6a14/util/0.log" Jan 22 10:26:59 crc kubenswrapper[4892]: I0122 10:26:59.862992 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ld2qn_8c2da807-7b14-4384-bf1a-dcfad84a6a14/pull/0.log" Jan 22 10:27:00 crc kubenswrapper[4892]: I0122 10:27:00.204651 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-75cn7_0e4feacd-2fae-4242-81a6-2de47aca5dd7/registry-server/0.log" Jan 22 10:27:00 crc kubenswrapper[4892]: 
I0122 10:27:00.210945 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-75cn7_0e4feacd-2fae-4242-81a6-2de47aca5dd7/extract-utilities/0.log" Jan 22 10:27:00 crc kubenswrapper[4892]: I0122 10:27:00.217890 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-75cn7_0e4feacd-2fae-4242-81a6-2de47aca5dd7/extract-content/0.log" Jan 22 10:27:00 crc kubenswrapper[4892]: I0122 10:27:00.927097 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vrbh2_49114a09-ac3a-4dbd-99f1-26543fbf5dcf/registry-server/0.log" Jan 22 10:27:00 crc kubenswrapper[4892]: I0122 10:27:00.938879 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vrbh2_49114a09-ac3a-4dbd-99f1-26543fbf5dcf/extract-utilities/0.log" Jan 22 10:27:00 crc kubenswrapper[4892]: I0122 10:27:00.951338 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vrbh2_49114a09-ac3a-4dbd-99f1-26543fbf5dcf/extract-content/0.log" Jan 22 10:27:00 crc kubenswrapper[4892]: I0122 10:27:00.966190 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-xt4nm_02e012df-582c-41ec-9c63-ff6dd7cc08c6/marketplace-operator/0.log" Jan 22 10:27:01 crc kubenswrapper[4892]: I0122 10:27:01.126476 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7pjdw_1c53bdc3-44ab-4be2-9f83-2d241776a337/registry-server/0.log" Jan 22 10:27:01 crc kubenswrapper[4892]: I0122 10:27:01.131989 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7pjdw_1c53bdc3-44ab-4be2-9f83-2d241776a337/extract-utilities/0.log" Jan 22 10:27:01 crc kubenswrapper[4892]: I0122 10:27:01.138580 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7pjdw_1c53bdc3-44ab-4be2-9f83-2d241776a337/extract-content/0.log" Jan 22 10:27:01 crc kubenswrapper[4892]: I0122 10:27:01.942078 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pf4pl_91b13dd5-7aad-496a-8138-9a9e638a0a01/registry-server/0.log" Jan 22 10:27:01 crc kubenswrapper[4892]: I0122 10:27:01.947199 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pf4pl_91b13dd5-7aad-496a-8138-9a9e638a0a01/extract-utilities/0.log" Jan 22 10:27:01 crc kubenswrapper[4892]: I0122 10:27:01.958392 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pf4pl_91b13dd5-7aad-496a-8138-9a9e638a0a01/extract-content/0.log" Jan 22 10:27:03 crc kubenswrapper[4892]: I0122 10:27:03.419178 4892 scope.go:117] "RemoveContainer" containerID="a2823515206b3fbd88435a728042268cd85e4c8af005a094c2894a1d295d0fe6" Jan 22 10:27:03 crc kubenswrapper[4892]: E0122 10:27:03.419739 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:27:14 crc kubenswrapper[4892]: I0122 10:27:14.418380 4892 scope.go:117] 
"RemoveContainer" containerID="a2823515206b3fbd88435a728042268cd85e4c8af005a094c2894a1d295d0fe6" Jan 22 10:27:14 crc kubenswrapper[4892]: E0122 10:27:14.419132 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:27:26 crc kubenswrapper[4892]: I0122 10:27:26.419044 4892 scope.go:117] "RemoveContainer" containerID="a2823515206b3fbd88435a728042268cd85e4c8af005a094c2894a1d295d0fe6" Jan 22 10:27:26 crc kubenswrapper[4892]: E0122 10:27:26.419882 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:27:41 crc kubenswrapper[4892]: I0122 10:27:41.430540 4892 scope.go:117] "RemoveContainer" containerID="a2823515206b3fbd88435a728042268cd85e4c8af005a094c2894a1d295d0fe6" Jan 22 10:27:41 crc kubenswrapper[4892]: E0122 10:27:41.431336 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:27:52 crc kubenswrapper[4892]: I0122 10:27:52.419433 4892 scope.go:117] "RemoveContainer" containerID="a2823515206b3fbd88435a728042268cd85e4c8af005a094c2894a1d295d0fe6" Jan 22 10:27:52 crc kubenswrapper[4892]: E0122 10:27:52.420406 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:28:05 crc kubenswrapper[4892]: I0122 10:28:05.418579 4892 scope.go:117] "RemoveContainer" containerID="a2823515206b3fbd88435a728042268cd85e4c8af005a094c2894a1d295d0fe6" Jan 22 10:28:05 crc kubenswrapper[4892]: E0122 10:28:05.419702 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:28:16 crc kubenswrapper[4892]: I0122 10:28:16.799921 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-szvst_e6301fe9-08d8-4bac-87e9-227fcc218129/controller/0.log" Jan 22 10:28:16 crc kubenswrapper[4892]: I0122 
10:28:16.806123 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-szvst_e6301fe9-08d8-4bac-87e9-227fcc218129/kube-rbac-proxy/0.log" Jan 22 10:28:16 crc kubenswrapper[4892]: I0122 10:28:16.826068 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/controller/0.log" Jan 22 10:28:16 crc kubenswrapper[4892]: I0122 10:28:16.968554 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-t8w6r_5c3dbc91-88ca-44dc-a4fd-fb147d8df3e0/cert-manager-controller/0.log" Jan 22 10:28:16 crc kubenswrapper[4892]: I0122 10:28:16.981505 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-rzj7t_6035615e-d06d-45df-b927-9233155546ce/cert-manager-cainjector/0.log" Jan 22 10:28:16 crc kubenswrapper[4892]: I0122 10:28:16.993601 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-rv9dl_fc56bdec-62b2-486e-84c5-363cc15c5cec/cert-manager-webhook/0.log" Jan 22 10:28:17 crc kubenswrapper[4892]: I0122 10:28:17.420160 4892 scope.go:117] "RemoveContainer" containerID="a2823515206b3fbd88435a728042268cd85e4c8af005a094c2894a1d295d0fe6" Jan 22 10:28:17 crc kubenswrapper[4892]: E0122 10:28:17.420561 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:28:18 crc kubenswrapper[4892]: I0122 10:28:18.189995 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59dd8b7cbf-mcfls_f7ec268a-c82e-455e-b4b9-d0f96998c015/manager/0.log" Jan 22 10:28:18 crc kubenswrapper[4892]: I0122 10:28:18.304410 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-fnrjr_815dba39-30ed-4471-bf04-ecc573373016/manager/0.log" Jan 22 10:28:18 crc kubenswrapper[4892]: I0122 10:28:18.324275 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-sx9p8_c020c33f-f12c-47ce-9639-c0069dff8bc4/manager/0.log" Jan 22 10:28:18 crc kubenswrapper[4892]: I0122 10:28:18.335531 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd_97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8/extract/0.log" Jan 22 10:28:18 crc kubenswrapper[4892]: I0122 10:28:18.345740 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd_97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8/util/0.log" Jan 22 10:28:18 crc kubenswrapper[4892]: I0122 10:28:18.353210 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd_97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8/pull/0.log" Jan 22 10:28:18 crc kubenswrapper[4892]: I0122 10:28:18.461512 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/frr/0.log" Jan 22 10:28:18 
crc kubenswrapper[4892]: I0122 10:28:18.473461 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/reloader/0.log" Jan 22 10:28:18 crc kubenswrapper[4892]: I0122 10:28:18.481692 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/frr-metrics/0.log" Jan 22 10:28:18 crc kubenswrapper[4892]: I0122 10:28:18.487926 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-9lqvx_2047bcfa-42e4-4e81-b2c9-47f4a876ea84/manager/0.log" Jan 22 10:28:18 crc kubenswrapper[4892]: I0122 10:28:18.492148 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/kube-rbac-proxy/0.log" Jan 22 10:28:18 crc kubenswrapper[4892]: I0122 10:28:18.498182 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-b9v4x_fcd15b84-585b-4984-9c1f-26a6c585ada4/manager/0.log" Jan 22 10:28:18 crc kubenswrapper[4892]: I0122 10:28:18.502787 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/kube-rbac-proxy-frr/0.log" Jan 22 10:28:18 crc kubenswrapper[4892]: I0122 10:28:18.511705 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/cp-frr-files/0.log" Jan 22 10:28:18 crc kubenswrapper[4892]: I0122 10:28:18.519518 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/cp-reloader/0.log" Jan 22 10:28:18 crc kubenswrapper[4892]: I0122 10:28:18.528423 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szhls_0f101c21-435f-4ede-8170-a8d399e50580/cp-metrics/0.log" Jan 22 10:28:18 crc kubenswrapper[4892]: I0122 10:28:18.530388 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-wkmzq_c9a77485-9340-433e-8bf6-cd47551438a9/manager/0.log" Jan 22 10:28:18 crc kubenswrapper[4892]: I0122 10:28:18.546635 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-drtv6_667e6efb-6488-461d-8e5f-380e05c4956e/frr-k8s-webhook-server/0.log" Jan 22 10:28:18 crc kubenswrapper[4892]: I0122 10:28:18.571191 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6744fff56c-5c2wg_e2e7c48f-6e23-4156-b679-30f2d9735501/manager/0.log" Jan 22 10:28:18 crc kubenswrapper[4892]: I0122 10:28:18.584195 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5bdbd58466-bwr22_b49a3e83-8e00-4934-8968-97d1905959d0/webhook-server/0.log" Jan 22 10:28:18 crc kubenswrapper[4892]: I0122 10:28:18.984956 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-25z65_4f507c71-c9ab-4398-b25a-b6070d41f2b7/manager/0.log" Jan 22 10:28:19 crc kubenswrapper[4892]: I0122 10:28:18.999966 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69d6c9f5b8-dcjs4_186e1123-d674-468b-91c1-92eb6bca4a30/manager/0.log" Jan 22 10:28:19 crc kubenswrapper[4892]: I0122 10:28:19.095925 4892 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-vm28p_361a2cfd-62a4-40cc-b85c-7e81e6adb91d/manager/0.log" Jan 22 10:28:19 crc kubenswrapper[4892]: I0122 10:28:19.096729 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2bjk4_caa80b1d-b3d2-47d6-99e6-73420bc5f61d/speaker/0.log" Jan 22 10:28:19 crc kubenswrapper[4892]: I0122 10:28:19.109564 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2bjk4_caa80b1d-b3d2-47d6-99e6-73420bc5f61d/kube-rbac-proxy/0.log" Jan 22 10:28:19 crc kubenswrapper[4892]: I0122 10:28:19.110953 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-67mcr_f942aff3-65c5-4507-af71-0e4596abc4cf/manager/0.log" Jan 22 10:28:19 crc kubenswrapper[4892]: I0122 10:28:19.144476 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-4ldkj_fd035f9e-2587-4286-85d9-db7c209970de/manager/0.log" Jan 22 10:28:19 crc kubenswrapper[4892]: I0122 10:28:19.200039 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5d8f59fb49-dvlzw_928d4875-5da0-47ce-a68d-99fed2b7edce/manager/0.log" Jan 22 10:28:19 crc kubenswrapper[4892]: I0122 10:28:19.296126 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-pkbln_8a19ffda-db08-44ec-bc17-d70c74f9552e/manager/0.log" Jan 22 10:28:19 crc kubenswrapper[4892]: I0122 10:28:19.314091 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-sjml2_43ab3264-2c0d-44a8-ab85-66efc360bf67/manager/0.log" Jan 22 10:28:19 crc kubenswrapper[4892]: I0122 10:28:19.333156 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr_c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea/manager/0.log" Jan 22 10:28:19 crc kubenswrapper[4892]: I0122 10:28:19.459877 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-698d6bb84b-sckbn_bf11bbca-62bd-4421-b0be-a62f87a6d600/operator/0.log" Jan 22 10:28:20 crc kubenswrapper[4892]: I0122 10:28:20.362112 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-t8w6r_5c3dbc91-88ca-44dc-a4fd-fb147d8df3e0/cert-manager-controller/0.log" Jan 22 10:28:20 crc kubenswrapper[4892]: I0122 10:28:20.380979 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-rzj7t_6035615e-d06d-45df-b927-9233155546ce/cert-manager-cainjector/0.log" Jan 22 10:28:20 crc kubenswrapper[4892]: I0122 10:28:20.390616 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-rv9dl_fc56bdec-62b2-486e-84c5-363cc15c5cec/cert-manager-webhook/0.log" Jan 22 10:28:20 crc kubenswrapper[4892]: I0122 10:28:20.680681 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-788c8b99b5-cws6m_7b2bb8eb-1122-4141-a4ed-c3d316c8b821/manager/0.log" Jan 22 10:28:20 crc kubenswrapper[4892]: I0122 10:28:20.769094 4892 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-index-hq2gz_016ec7ec-1244-47ab-81ba-957ed4b83b4f/registry-server/0.log" Jan 22 10:28:20 crc kubenswrapper[4892]: I0122 10:28:20.829172 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-9htzp_4ce3456e-dba6-498d-bf5a-aef2832489fe/manager/0.log" Jan 22 10:28:20 crc kubenswrapper[4892]: I0122 10:28:20.856892 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-hf9ft_e23d3dd6-bce9-496f-840b-0bbd3017826f/manager/0.log" Jan 22 10:28:20 crc kubenswrapper[4892]: I0122 10:28:20.880800 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-hkmzg_7be69e64-d272-47f2-933a-4925c0aad02c/operator/0.log" Jan 22 10:28:20 crc kubenswrapper[4892]: I0122 10:28:20.907784 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-gfcjl_f7dcb7b0-0580-4aff-8770-377761a44f88/manager/0.log" Jan 22 10:28:20 crc kubenswrapper[4892]: I0122 10:28:20.965320 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-2n9gl_062ff35c-ceb7-44b0-a2ef-1d79a14a444c/manager/0.log" Jan 22 10:28:20 crc kubenswrapper[4892]: I0122 10:28:20.979228 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-hj2tb_be68c0da-a0d9-463c-be32-6191b85ae620/manager/0.log" Jan 22 10:28:20 crc kubenswrapper[4892]: I0122 10:28:20.995921 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5ffb9c6597-xq8jw_b6638ff5-13e6-44b1-8711-0c775882282f/manager/0.log" Jan 22 10:28:21 crc kubenswrapper[4892]: I0122 10:28:21.136628 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-wf6dw_3a01b910-5841-4f20-b270-c7040213ac8d/control-plane-machine-set-operator/0.log" Jan 22 10:28:21 crc kubenswrapper[4892]: I0122 10:28:21.148345 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mhgmk_09f94488-4261-4a70-ab65-e85c42ba3313/kube-rbac-proxy/0.log" Jan 22 10:28:21 crc kubenswrapper[4892]: I0122 10:28:21.158123 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mhgmk_09f94488-4261-4a70-ab65-e85c42ba3313/machine-api-operator/0.log" Jan 22 10:28:22 crc kubenswrapper[4892]: I0122 10:28:22.026597 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59dd8b7cbf-mcfls_f7ec268a-c82e-455e-b4b9-d0f96998c015/manager/0.log" Jan 22 10:28:22 crc kubenswrapper[4892]: I0122 10:28:22.070534 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-fnrjr_815dba39-30ed-4471-bf04-ecc573373016/manager/0.log" Jan 22 10:28:22 crc kubenswrapper[4892]: I0122 10:28:22.084707 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-sx9p8_c020c33f-f12c-47ce-9639-c0069dff8bc4/manager/0.log" Jan 22 10:28:22 crc kubenswrapper[4892]: I0122 10:28:22.095301 4892 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd_97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8/extract/0.log" Jan 22 10:28:22 crc kubenswrapper[4892]: I0122 10:28:22.101113 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd_97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8/util/0.log" Jan 22 10:28:22 crc kubenswrapper[4892]: I0122 10:28:22.111918 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fa7ace019a3f96c6dacf22bf83e494fa656797b1a183803f1a18d1a3f9mfvsd_97ff7aaf-1e3d-4a24-b873-ba2ae47fccd8/pull/0.log" Jan 22 10:28:22 crc kubenswrapper[4892]: I0122 10:28:22.226063 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-9lqvx_2047bcfa-42e4-4e81-b2c9-47f4a876ea84/manager/0.log" Jan 22 10:28:22 crc kubenswrapper[4892]: I0122 10:28:22.241554 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-b9v4x_fcd15b84-585b-4984-9c1f-26a6c585ada4/manager/0.log" Jan 22 10:28:22 crc kubenswrapper[4892]: I0122 10:28:22.278441 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-wkmzq_c9a77485-9340-433e-8bf6-cd47551438a9/manager/0.log" Jan 22 10:28:22 crc kubenswrapper[4892]: I0122 10:28:22.433692 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-5g6g2_d0e47195-84b2-4249-8f2e-833525b47d1c/nmstate-console-plugin/0.log" Jan 22 10:28:22 crc kubenswrapper[4892]: I0122 10:28:22.469868 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-c4k72_76d51902-a31d-4cfd-aa0a-de6c055c79fd/nmstate-handler/0.log" Jan 22 10:28:22 crc kubenswrapper[4892]: I0122 10:28:22.484221 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-wcg8m_9911d829-131f-4c59-9268-c0165a5f1126/nmstate-metrics/0.log" Jan 22 10:28:22 crc kubenswrapper[4892]: I0122 10:28:22.497075 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-wcg8m_9911d829-131f-4c59-9268-c0165a5f1126/kube-rbac-proxy/0.log" Jan 22 10:28:22 crc kubenswrapper[4892]: I0122 10:28:22.516436 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-blljf_ebf0d927-7aa3-4f75-b5be-7037df253175/nmstate-operator/0.log" Jan 22 10:28:22 crc kubenswrapper[4892]: I0122 10:28:22.526059 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-t6lmb_21249177-5044-4f9b-a0dc-dcad499ec3ad/nmstate-webhook/0.log" Jan 22 10:28:22 crc kubenswrapper[4892]: I0122 10:28:22.568228 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-25z65_4f507c71-c9ab-4398-b25a-b6070d41f2b7/manager/0.log" Jan 22 10:28:22 crc kubenswrapper[4892]: I0122 10:28:22.581103 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69d6c9f5b8-dcjs4_186e1123-d674-468b-91c1-92eb6bca4a30/manager/0.log" Jan 22 10:28:22 crc kubenswrapper[4892]: I0122 10:28:22.644057 4892 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-vm28p_361a2cfd-62a4-40cc-b85c-7e81e6adb91d/manager/0.log" Jan 22 10:28:22 crc kubenswrapper[4892]: I0122 10:28:22.656639 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-67mcr_f942aff3-65c5-4507-af71-0e4596abc4cf/manager/0.log" Jan 22 10:28:22 crc kubenswrapper[4892]: I0122 10:28:22.691577 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-4ldkj_fd035f9e-2587-4286-85d9-db7c209970de/manager/0.log" Jan 22 10:28:22 crc kubenswrapper[4892]: I0122 10:28:22.744426 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5d8f59fb49-dvlzw_928d4875-5da0-47ce-a68d-99fed2b7edce/manager/0.log" Jan 22 10:28:22 crc kubenswrapper[4892]: I0122 10:28:22.835520 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-pkbln_8a19ffda-db08-44ec-bc17-d70c74f9552e/manager/0.log" Jan 22 10:28:22 crc kubenswrapper[4892]: I0122 10:28:22.845896 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-sjml2_43ab3264-2c0d-44a8-ab85-66efc360bf67/manager/0.log" Jan 22 10:28:22 crc kubenswrapper[4892]: I0122 10:28:22.866027 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c9c58b557wxcnr_c5d90f05-33b1-4b25-84b8-fc2a6e2c0cea/manager/0.log" Jan 22 10:28:23 crc kubenswrapper[4892]: I0122 10:28:23.034573 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-698d6bb84b-sckbn_bf11bbca-62bd-4421-b0be-a62f87a6d600/operator/0.log" Jan 22 10:28:24 crc kubenswrapper[4892]: I0122 10:28:24.315981 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-788c8b99b5-cws6m_7b2bb8eb-1122-4141-a4ed-c3d316c8b821/manager/0.log" Jan 22 10:28:24 crc kubenswrapper[4892]: I0122 10:28:24.390836 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-hq2gz_016ec7ec-1244-47ab-81ba-957ed4b83b4f/registry-server/0.log" Jan 22 10:28:24 crc kubenswrapper[4892]: I0122 10:28:24.455396 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-9htzp_4ce3456e-dba6-498d-bf5a-aef2832489fe/manager/0.log" Jan 22 10:28:24 crc kubenswrapper[4892]: I0122 10:28:24.478519 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-hf9ft_e23d3dd6-bce9-496f-840b-0bbd3017826f/manager/0.log" Jan 22 10:28:24 crc kubenswrapper[4892]: I0122 10:28:24.502546 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-hkmzg_7be69e64-d272-47f2-933a-4925c0aad02c/operator/0.log" Jan 22 10:28:24 crc kubenswrapper[4892]: I0122 10:28:24.533373 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-gfcjl_f7dcb7b0-0580-4aff-8770-377761a44f88/manager/0.log" Jan 22 10:28:24 crc kubenswrapper[4892]: I0122 10:28:24.595130 4892 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-2n9gl_062ff35c-ceb7-44b0-a2ef-1d79a14a444c/manager/0.log" Jan 22 10:28:24 crc kubenswrapper[4892]: I0122 10:28:24.605250 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-hj2tb_be68c0da-a0d9-463c-be32-6191b85ae620/manager/0.log" Jan 22 10:28:24 crc kubenswrapper[4892]: I0122 10:28:24.617126 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5ffb9c6597-xq8jw_b6638ff5-13e6-44b1-8711-0c775882282f/manager/0.log" Jan 22 10:28:26 crc kubenswrapper[4892]: I0122 10:28:26.245614 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-7rbdp_afe12181-a266-4b88-b591-e1c130d15254/kube-multus-additional-cni-plugins/0.log" Jan 22 10:28:26 crc kubenswrapper[4892]: I0122 10:28:26.255202 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-7rbdp_afe12181-a266-4b88-b591-e1c130d15254/egress-router-binary-copy/0.log" Jan 22 10:28:26 crc kubenswrapper[4892]: I0122 10:28:26.264475 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-7rbdp_afe12181-a266-4b88-b591-e1c130d15254/cni-plugins/0.log" Jan 22 10:28:26 crc kubenswrapper[4892]: I0122 10:28:26.273502 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-7rbdp_afe12181-a266-4b88-b591-e1c130d15254/bond-cni-plugin/0.log" Jan 22 10:28:26 crc kubenswrapper[4892]: I0122 10:28:26.283521 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-7rbdp_afe12181-a266-4b88-b591-e1c130d15254/routeoverride-cni/0.log" Jan 22 10:28:26 crc kubenswrapper[4892]: I0122 10:28:26.292525 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-7rbdp_afe12181-a266-4b88-b591-e1c130d15254/whereabouts-cni-bincopy/0.log" Jan 22 10:28:26 crc kubenswrapper[4892]: I0122 10:28:26.304364 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-7rbdp_afe12181-a266-4b88-b591-e1c130d15254/whereabouts-cni/0.log" Jan 22 10:28:26 crc kubenswrapper[4892]: I0122 10:28:26.340425 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-86wr5_23125b22-0965-46a8-a698-dc256f032b3c/multus-admission-controller/0.log" Jan 22 10:28:26 crc kubenswrapper[4892]: I0122 10:28:26.347986 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-86wr5_23125b22-0965-46a8-a698-dc256f032b3c/kube-rbac-proxy/0.log" Jan 22 10:28:26 crc kubenswrapper[4892]: I0122 10:28:26.396554 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hz9vn_80ef00cc-97bb-4f08-ba72-3947ab29043f/kube-multus/2.log" Jan 22 10:28:26 crc kubenswrapper[4892]: I0122 10:28:26.492070 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hz9vn_80ef00cc-97bb-4f08-ba72-3947ab29043f/kube-multus/3.log" Jan 22 10:28:26 crc kubenswrapper[4892]: I0122 10:28:26.536632 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-5nnld_f7391f43-09a9-4333-8df2-72d4fdc02615/network-metrics-daemon/0.log" Jan 22 10:28:26 
crc kubenswrapper[4892]: I0122 10:28:26.542880 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-5nnld_f7391f43-09a9-4333-8df2-72d4fdc02615/kube-rbac-proxy/0.log" Jan 22 10:28:29 crc kubenswrapper[4892]: I0122 10:28:29.419863 4892 scope.go:117] "RemoveContainer" containerID="a2823515206b3fbd88435a728042268cd85e4c8af005a094c2894a1d295d0fe6" Jan 22 10:28:29 crc kubenswrapper[4892]: E0122 10:28:29.420522 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:28:43 crc kubenswrapper[4892]: I0122 10:28:43.425693 4892 scope.go:117] "RemoveContainer" containerID="a2823515206b3fbd88435a728042268cd85e4c8af005a094c2894a1d295d0fe6" Jan 22 10:28:43 crc kubenswrapper[4892]: E0122 10:28:43.426326 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:28:55 crc kubenswrapper[4892]: I0122 10:28:55.418779 4892 scope.go:117] "RemoveContainer" containerID="a2823515206b3fbd88435a728042268cd85e4c8af005a094c2894a1d295d0fe6" Jan 22 10:28:55 crc kubenswrapper[4892]: E0122 10:28:55.419587 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:29:08 crc kubenswrapper[4892]: I0122 10:29:08.418762 4892 scope.go:117] "RemoveContainer" containerID="a2823515206b3fbd88435a728042268cd85e4c8af005a094c2894a1d295d0fe6" Jan 22 10:29:08 crc kubenswrapper[4892]: E0122 10:29:08.419633 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:29:09 crc kubenswrapper[4892]: I0122 10:29:09.313120 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vm6pp"] Jan 22 10:29:09 crc kubenswrapper[4892]: E0122 10:29:09.317977 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60224567-88b8-4711-aaaf-6c3d9a73dc6c" containerName="container-00" Jan 22 10:29:09 crc kubenswrapper[4892]: I0122 10:29:09.318003 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="60224567-88b8-4711-aaaf-6c3d9a73dc6c" containerName="container-00" Jan 22 10:29:09 crc kubenswrapper[4892]: I0122 10:29:09.318192 4892 
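
The machine-config-daemon entries above show a container pinned at the kubelet's maximum restart backoff: every sync attempt between 10:28:29 and 10:29:08 is rejected with "back-off 5m0s", and the container only starts again at 10:29:21 (below) once the window expires. A sketch of the schedule this implies, using the documented CrashLoopBackOff parameters (10s initial delay, doubling per crash, capped at 5m) as assumptions from the Kubernetes docs rather than values read out of this log:

    package main

    import (
        "fmt"
        "time"
    )

    // crashLoopDelay returns the wait before restart attempt n, assuming the
    // documented defaults: 10s initial delay, doubled per crash, capped at 5m.
    func crashLoopDelay(restarts int) time.Duration {
        const (
            initial  = 10 * time.Second
            maxDelay = 5 * time.Minute
        )
        d := initial
        for i := 0; i < restarts; i++ {
            d *= 2
            if d >= maxDelay {
                return maxDelay
            }
        }
        return d
    }

    func main() {
        for r := 0; r <= 5; r++ {
            fmt.Printf("after crash %d: wait %v\n", r+1, crashLoopDelay(r))
        }
        // after crash 1: wait 10s ... after crash 6: wait 5m0s ("back-off 5m0s" above)
    }
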
memory_manager.go:354] "RemoveStaleState removing state" podUID="60224567-88b8-4711-aaaf-6c3d9a73dc6c" containerName="container-00" Jan 22 10:29:09 crc kubenswrapper[4892]: I0122 10:29:09.319658 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vm6pp" Jan 22 10:29:09 crc kubenswrapper[4892]: I0122 10:29:09.332064 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vm6pp"] Jan 22 10:29:09 crc kubenswrapper[4892]: I0122 10:29:09.391938 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fb17d56-7f55-46f8-a079-1e615c9a822a-utilities\") pod \"redhat-marketplace-vm6pp\" (UID: \"0fb17d56-7f55-46f8-a079-1e615c9a822a\") " pod="openshift-marketplace/redhat-marketplace-vm6pp" Jan 22 10:29:09 crc kubenswrapper[4892]: I0122 10:29:09.392023 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlrnr\" (UniqueName: \"kubernetes.io/projected/0fb17d56-7f55-46f8-a079-1e615c9a822a-kube-api-access-xlrnr\") pod \"redhat-marketplace-vm6pp\" (UID: \"0fb17d56-7f55-46f8-a079-1e615c9a822a\") " pod="openshift-marketplace/redhat-marketplace-vm6pp" Jan 22 10:29:09 crc kubenswrapper[4892]: I0122 10:29:09.392041 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fb17d56-7f55-46f8-a079-1e615c9a822a-catalog-content\") pod \"redhat-marketplace-vm6pp\" (UID: \"0fb17d56-7f55-46f8-a079-1e615c9a822a\") " pod="openshift-marketplace/redhat-marketplace-vm6pp" Jan 22 10:29:09 crc kubenswrapper[4892]: I0122 10:29:09.493932 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlrnr\" (UniqueName: \"kubernetes.io/projected/0fb17d56-7f55-46f8-a079-1e615c9a822a-kube-api-access-xlrnr\") pod \"redhat-marketplace-vm6pp\" (UID: \"0fb17d56-7f55-46f8-a079-1e615c9a822a\") " pod="openshift-marketplace/redhat-marketplace-vm6pp" Jan 22 10:29:09 crc kubenswrapper[4892]: I0122 10:29:09.493967 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fb17d56-7f55-46f8-a079-1e615c9a822a-catalog-content\") pod \"redhat-marketplace-vm6pp\" (UID: \"0fb17d56-7f55-46f8-a079-1e615c9a822a\") " pod="openshift-marketplace/redhat-marketplace-vm6pp" Jan 22 10:29:09 crc kubenswrapper[4892]: I0122 10:29:09.494150 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fb17d56-7f55-46f8-a079-1e615c9a822a-utilities\") pod \"redhat-marketplace-vm6pp\" (UID: \"0fb17d56-7f55-46f8-a079-1e615c9a822a\") " pod="openshift-marketplace/redhat-marketplace-vm6pp" Jan 22 10:29:09 crc kubenswrapper[4892]: I0122 10:29:09.494573 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fb17d56-7f55-46f8-a079-1e615c9a822a-catalog-content\") pod \"redhat-marketplace-vm6pp\" (UID: \"0fb17d56-7f55-46f8-a079-1e615c9a822a\") " pod="openshift-marketplace/redhat-marketplace-vm6pp" Jan 22 10:29:09 crc kubenswrapper[4892]: I0122 10:29:09.494587 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fb17d56-7f55-46f8-a079-1e615c9a822a-utilities\") pod 
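
Each volume above moves through the same reconciler sequence: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded. The kube-api-access-xlrnr volume being attached here is the API-access projected volume injected into every pod; a sketch of its usual contents using the client-go types, where the token path, 3607s expiry, kube-root-ca.crt ConfigMap, and downward-API namespace file are the standard upstream defaults (assumed, not read from this cluster):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // kubeAPIAccessVolume sketches the projected volume behind a generated
    // "kube-api-access-*" mount: bound SA token + cluster CA + namespace file.
    func kubeAPIAccessVolume(name string) corev1.Volume {
        expiry := int64(3607)
        return corev1.Volume{
            Name: name,
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{
                        {ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
                            ExpirationSeconds: &expiry,
                            Path:              "token",
                        }},
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
                            Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
                        }},
                        {DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "namespace",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
                            }},
                        }},
                    },
                },
            },
        }
    }

    func main() {
        v := kubeAPIAccessVolume("kube-api-access-xlrnr")
        fmt.Println(v.Name, "with", len(v.VolumeSource.Projected.Sources), "projected sources")
    }
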
\"redhat-marketplace-vm6pp\" (UID: \"0fb17d56-7f55-46f8-a079-1e615c9a822a\") " pod="openshift-marketplace/redhat-marketplace-vm6pp" Jan 22 10:29:09 crc kubenswrapper[4892]: I0122 10:29:09.513953 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlrnr\" (UniqueName: \"kubernetes.io/projected/0fb17d56-7f55-46f8-a079-1e615c9a822a-kube-api-access-xlrnr\") pod \"redhat-marketplace-vm6pp\" (UID: \"0fb17d56-7f55-46f8-a079-1e615c9a822a\") " pod="openshift-marketplace/redhat-marketplace-vm6pp" Jan 22 10:29:09 crc kubenswrapper[4892]: I0122 10:29:09.644140 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vm6pp" Jan 22 10:29:10 crc kubenswrapper[4892]: I0122 10:29:10.164655 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vm6pp"] Jan 22 10:29:10 crc kubenswrapper[4892]: I0122 10:29:10.687544 4892 generic.go:334] "Generic (PLEG): container finished" podID="0fb17d56-7f55-46f8-a079-1e615c9a822a" containerID="5b002d71203e2f7fee8834f1ccae130b614ab70cc99c37ef7796103f80149481" exitCode=0 Jan 22 10:29:10 crc kubenswrapper[4892]: I0122 10:29:10.687587 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vm6pp" event={"ID":"0fb17d56-7f55-46f8-a079-1e615c9a822a","Type":"ContainerDied","Data":"5b002d71203e2f7fee8834f1ccae130b614ab70cc99c37ef7796103f80149481"} Jan 22 10:29:10 crc kubenswrapper[4892]: I0122 10:29:10.687616 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vm6pp" event={"ID":"0fb17d56-7f55-46f8-a079-1e615c9a822a","Type":"ContainerStarted","Data":"14cadb77ffd232ae49f7d45b5f068d0c66ab0191d311fed241d86bc432be71d5"} Jan 22 10:29:12 crc kubenswrapper[4892]: I0122 10:29:12.706642 4892 generic.go:334] "Generic (PLEG): container finished" podID="0fb17d56-7f55-46f8-a079-1e615c9a822a" containerID="4a7659da51c9ce2dff41120e9a438e000a89b647797e617e359241a535cebb4c" exitCode=0 Jan 22 10:29:12 crc kubenswrapper[4892]: I0122 10:29:12.706716 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vm6pp" event={"ID":"0fb17d56-7f55-46f8-a079-1e615c9a822a","Type":"ContainerDied","Data":"4a7659da51c9ce2dff41120e9a438e000a89b647797e617e359241a535cebb4c"} Jan 22 10:29:14 crc kubenswrapper[4892]: I0122 10:29:14.735240 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vm6pp" event={"ID":"0fb17d56-7f55-46f8-a079-1e615c9a822a","Type":"ContainerStarted","Data":"8598f07ac607d0acc2d66627d782339d8cf620dc4b2481e3c009fd89d5cd60b5"} Jan 22 10:29:19 crc kubenswrapper[4892]: I0122 10:29:19.645348 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vm6pp" Jan 22 10:29:19 crc kubenswrapper[4892]: I0122 10:29:19.655746 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vm6pp" Jan 22 10:29:19 crc kubenswrapper[4892]: I0122 10:29:19.708212 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vm6pp" Jan 22 10:29:19 crc kubenswrapper[4892]: I0122 10:29:19.726996 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vm6pp" podStartSLOduration=7.951493159 podStartE2EDuration="10.72697493s" podCreationTimestamp="2026-01-22 
10:29:09 +0000 UTC" firstStartedPulling="2026-01-22 10:29:10.689213858 +0000 UTC m=+4720.533292921" lastFinishedPulling="2026-01-22 10:29:13.464695629 +0000 UTC m=+4723.308774692" observedRunningTime="2026-01-22 10:29:14.778060432 +0000 UTC m=+4724.622139525" watchObservedRunningTime="2026-01-22 10:29:19.72697493 +0000 UTC m=+4729.571053993" Jan 22 10:29:19 crc kubenswrapper[4892]: I0122 10:29:19.824565 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vm6pp" Jan 22 10:29:19 crc kubenswrapper[4892]: I0122 10:29:19.942399 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vm6pp"] Jan 22 10:29:21 crc kubenswrapper[4892]: I0122 10:29:21.424256 4892 scope.go:117] "RemoveContainer" containerID="a2823515206b3fbd88435a728042268cd85e4c8af005a094c2894a1d295d0fe6" Jan 22 10:29:21 crc kubenswrapper[4892]: I0122 10:29:21.790189 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vm6pp" podUID="0fb17d56-7f55-46f8-a079-1e615c9a822a" containerName="registry-server" containerID="cri-o://8598f07ac607d0acc2d66627d782339d8cf620dc4b2481e3c009fd89d5cd60b5" gracePeriod=2 Jan 22 10:29:21 crc kubenswrapper[4892]: I0122 10:29:21.790722 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerStarted","Data":"f9d5c23b086ac087bb5de3c649d5c5cf2f778740702a58b9407f4e57215bea88"} Jan 22 10:29:22 crc kubenswrapper[4892]: I0122 10:29:22.263811 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vm6pp" Jan 22 10:29:22 crc kubenswrapper[4892]: I0122 10:29:22.375550 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fb17d56-7f55-46f8-a079-1e615c9a822a-utilities\") pod \"0fb17d56-7f55-46f8-a079-1e615c9a822a\" (UID: \"0fb17d56-7f55-46f8-a079-1e615c9a822a\") " Jan 22 10:29:22 crc kubenswrapper[4892]: I0122 10:29:22.375996 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlrnr\" (UniqueName: \"kubernetes.io/projected/0fb17d56-7f55-46f8-a079-1e615c9a822a-kube-api-access-xlrnr\") pod \"0fb17d56-7f55-46f8-a079-1e615c9a822a\" (UID: \"0fb17d56-7f55-46f8-a079-1e615c9a822a\") " Jan 22 10:29:22 crc kubenswrapper[4892]: I0122 10:29:22.376948 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fb17d56-7f55-46f8-a079-1e615c9a822a-utilities" (OuterVolumeSpecName: "utilities") pod "0fb17d56-7f55-46f8-a079-1e615c9a822a" (UID: "0fb17d56-7f55-46f8-a079-1e615c9a822a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:29:22 crc kubenswrapper[4892]: I0122 10:29:22.377183 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fb17d56-7f55-46f8-a079-1e615c9a822a-catalog-content\") pod \"0fb17d56-7f55-46f8-a079-1e615c9a822a\" (UID: \"0fb17d56-7f55-46f8-a079-1e615c9a822a\") " Jan 22 10:29:22 crc kubenswrapper[4892]: I0122 10:29:22.378115 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fb17d56-7f55-46f8-a079-1e615c9a822a-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:29:22 crc kubenswrapper[4892]: I0122 10:29:22.385979 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fb17d56-7f55-46f8-a079-1e615c9a822a-kube-api-access-xlrnr" (OuterVolumeSpecName: "kube-api-access-xlrnr") pod "0fb17d56-7f55-46f8-a079-1e615c9a822a" (UID: "0fb17d56-7f55-46f8-a079-1e615c9a822a"). InnerVolumeSpecName "kube-api-access-xlrnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:29:22 crc kubenswrapper[4892]: I0122 10:29:22.407273 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fb17d56-7f55-46f8-a079-1e615c9a822a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0fb17d56-7f55-46f8-a079-1e615c9a822a" (UID: "0fb17d56-7f55-46f8-a079-1e615c9a822a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:29:22 crc kubenswrapper[4892]: I0122 10:29:22.479718 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlrnr\" (UniqueName: \"kubernetes.io/projected/0fb17d56-7f55-46f8-a079-1e615c9a822a-kube-api-access-xlrnr\") on node \"crc\" DevicePath \"\"" Jan 22 10:29:22 crc kubenswrapper[4892]: I0122 10:29:22.479769 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fb17d56-7f55-46f8-a079-1e615c9a822a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:29:22 crc kubenswrapper[4892]: I0122 10:29:22.804170 4892 generic.go:334] "Generic (PLEG): container finished" podID="0fb17d56-7f55-46f8-a079-1e615c9a822a" containerID="8598f07ac607d0acc2d66627d782339d8cf620dc4b2481e3c009fd89d5cd60b5" exitCode=0 Jan 22 10:29:22 crc kubenswrapper[4892]: I0122 10:29:22.804238 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vm6pp" Jan 22 10:29:22 crc kubenswrapper[4892]: I0122 10:29:22.804276 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vm6pp" event={"ID":"0fb17d56-7f55-46f8-a079-1e615c9a822a","Type":"ContainerDied","Data":"8598f07ac607d0acc2d66627d782339d8cf620dc4b2481e3c009fd89d5cd60b5"} Jan 22 10:29:22 crc kubenswrapper[4892]: I0122 10:29:22.804621 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vm6pp" event={"ID":"0fb17d56-7f55-46f8-a079-1e615c9a822a","Type":"ContainerDied","Data":"14cadb77ffd232ae49f7d45b5f068d0c66ab0191d311fed241d86bc432be71d5"} Jan 22 10:29:22 crc kubenswrapper[4892]: I0122 10:29:22.804652 4892 scope.go:117] "RemoveContainer" containerID="8598f07ac607d0acc2d66627d782339d8cf620dc4b2481e3c009fd89d5cd60b5" Jan 22 10:29:22 crc kubenswrapper[4892]: I0122 10:29:22.829448 4892 scope.go:117] "RemoveContainer" containerID="4a7659da51c9ce2dff41120e9a438e000a89b647797e617e359241a535cebb4c" Jan 22 10:29:22 crc kubenswrapper[4892]: I0122 10:29:22.856235 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vm6pp"] Jan 22 10:29:22 crc kubenswrapper[4892]: I0122 10:29:22.867150 4892 scope.go:117] "RemoveContainer" containerID="5b002d71203e2f7fee8834f1ccae130b614ab70cc99c37ef7796103f80149481" Jan 22 10:29:22 crc kubenswrapper[4892]: I0122 10:29:22.869479 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vm6pp"] Jan 22 10:29:22 crc kubenswrapper[4892]: I0122 10:29:22.905129 4892 scope.go:117] "RemoveContainer" containerID="8598f07ac607d0acc2d66627d782339d8cf620dc4b2481e3c009fd89d5cd60b5" Jan 22 10:29:22 crc kubenswrapper[4892]: E0122 10:29:22.905791 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8598f07ac607d0acc2d66627d782339d8cf620dc4b2481e3c009fd89d5cd60b5\": container with ID starting with 8598f07ac607d0acc2d66627d782339d8cf620dc4b2481e3c009fd89d5cd60b5 not found: ID does not exist" containerID="8598f07ac607d0acc2d66627d782339d8cf620dc4b2481e3c009fd89d5cd60b5" Jan 22 10:29:22 crc kubenswrapper[4892]: I0122 10:29:22.905874 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8598f07ac607d0acc2d66627d782339d8cf620dc4b2481e3c009fd89d5cd60b5"} err="failed to get container status \"8598f07ac607d0acc2d66627d782339d8cf620dc4b2481e3c009fd89d5cd60b5\": rpc error: code = NotFound desc = could not find container \"8598f07ac607d0acc2d66627d782339d8cf620dc4b2481e3c009fd89d5cd60b5\": container with ID starting with 8598f07ac607d0acc2d66627d782339d8cf620dc4b2481e3c009fd89d5cd60b5 not found: ID does not exist" Jan 22 10:29:22 crc kubenswrapper[4892]: I0122 10:29:22.906043 4892 scope.go:117] "RemoveContainer" containerID="4a7659da51c9ce2dff41120e9a438e000a89b647797e617e359241a535cebb4c" Jan 22 10:29:22 crc kubenswrapper[4892]: E0122 10:29:22.906531 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a7659da51c9ce2dff41120e9a438e000a89b647797e617e359241a535cebb4c\": container with ID starting with 4a7659da51c9ce2dff41120e9a438e000a89b647797e617e359241a535cebb4c not found: ID does not exist" containerID="4a7659da51c9ce2dff41120e9a438e000a89b647797e617e359241a535cebb4c" Jan 22 10:29:22 crc kubenswrapper[4892]: I0122 10:29:22.906563 4892 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a7659da51c9ce2dff41120e9a438e000a89b647797e617e359241a535cebb4c"} err="failed to get container status \"4a7659da51c9ce2dff41120e9a438e000a89b647797e617e359241a535cebb4c\": rpc error: code = NotFound desc = could not find container \"4a7659da51c9ce2dff41120e9a438e000a89b647797e617e359241a535cebb4c\": container with ID starting with 4a7659da51c9ce2dff41120e9a438e000a89b647797e617e359241a535cebb4c not found: ID does not exist" Jan 22 10:29:22 crc kubenswrapper[4892]: I0122 10:29:22.906583 4892 scope.go:117] "RemoveContainer" containerID="5b002d71203e2f7fee8834f1ccae130b614ab70cc99c37ef7796103f80149481" Jan 22 10:29:22 crc kubenswrapper[4892]: E0122 10:29:22.906852 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b002d71203e2f7fee8834f1ccae130b614ab70cc99c37ef7796103f80149481\": container with ID starting with 5b002d71203e2f7fee8834f1ccae130b614ab70cc99c37ef7796103f80149481 not found: ID does not exist" containerID="5b002d71203e2f7fee8834f1ccae130b614ab70cc99c37ef7796103f80149481" Jan 22 10:29:22 crc kubenswrapper[4892]: I0122 10:29:22.906879 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b002d71203e2f7fee8834f1ccae130b614ab70cc99c37ef7796103f80149481"} err="failed to get container status \"5b002d71203e2f7fee8834f1ccae130b614ab70cc99c37ef7796103f80149481\": rpc error: code = NotFound desc = could not find container \"5b002d71203e2f7fee8834f1ccae130b614ab70cc99c37ef7796103f80149481\": container with ID starting with 5b002d71203e2f7fee8834f1ccae130b614ab70cc99c37ef7796103f80149481 not found: ID does not exist" Jan 22 10:29:23 crc kubenswrapper[4892]: I0122 10:29:23.430188 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fb17d56-7f55-46f8-a079-1e615c9a822a" path="/var/lib/kubelet/pods/0fb17d56-7f55-46f8-a079-1e615c9a822a/volumes" Jan 22 10:30:00 crc kubenswrapper[4892]: I0122 10:30:00.152398 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484630-lnwlk"] Jan 22 10:30:00 crc kubenswrapper[4892]: E0122 10:30:00.153416 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fb17d56-7f55-46f8-a079-1e615c9a822a" containerName="extract-content" Jan 22 10:30:00 crc kubenswrapper[4892]: I0122 10:30:00.153433 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fb17d56-7f55-46f8-a079-1e615c9a822a" containerName="extract-content" Jan 22 10:30:00 crc kubenswrapper[4892]: E0122 10:30:00.153451 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fb17d56-7f55-46f8-a079-1e615c9a822a" containerName="extract-utilities" Jan 22 10:30:00 crc kubenswrapper[4892]: I0122 10:30:00.153460 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fb17d56-7f55-46f8-a079-1e615c9a822a" containerName="extract-utilities" Jan 22 10:30:00 crc kubenswrapper[4892]: E0122 10:30:00.153515 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fb17d56-7f55-46f8-a079-1e615c9a822a" containerName="registry-server" Jan 22 10:30:00 crc kubenswrapper[4892]: I0122 10:30:00.153524 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fb17d56-7f55-46f8-a079-1e615c9a822a" containerName="registry-server" Jan 22 10:30:00 crc kubenswrapper[4892]: I0122 10:30:00.153871 4892 memory_manager.go:354] "RemoveStaleState removing state" 
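
The three ContainerStatus / DeleteContainer "NotFound" errors above are the benign tail of teardown: the containers are already gone, so the status lookup fails, the kubelet logs it, and cleanup continues (the pod's volume dir is still removed at 10:29:23). The usual pattern for keeping such a delete path idempotent, sketched here rather than taken from kubelet source:

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeContainer wraps a hypothetical CRI remove call; a gRPC NotFound
    // means the container is already gone, which cleanup treats as success.
    func removeContainer(remove func(id string) error, id string) error {
        err := remove(id)
        if status.Code(err) == codes.NotFound {
            return nil // already removed; nothing left to do
        }
        return err
    }

    func main() {
        alreadyGone := func(id string) error {
            return status.Error(codes.NotFound, "could not find container "+id)
        }
        fmt.Println(removeContainer(alreadyGone, "example-container-id")) // <nil>
    }
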
podUID="0fb17d56-7f55-46f8-a079-1e615c9a822a" containerName="registry-server" Jan 22 10:30:00 crc kubenswrapper[4892]: I0122 10:30:00.154814 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-lnwlk" Jan 22 10:30:00 crc kubenswrapper[4892]: I0122 10:30:00.158043 4892 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 10:30:00 crc kubenswrapper[4892]: I0122 10:30:00.159188 4892 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 10:30:00 crc kubenswrapper[4892]: I0122 10:30:00.162614 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484630-lnwlk"] Jan 22 10:30:00 crc kubenswrapper[4892]: I0122 10:30:00.248761 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd7wq\" (UniqueName: \"kubernetes.io/projected/543b8771-6f81-4ff6-9077-4501690e9d9e-kube-api-access-rd7wq\") pod \"collect-profiles-29484630-lnwlk\" (UID: \"543b8771-6f81-4ff6-9077-4501690e9d9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-lnwlk" Jan 22 10:30:00 crc kubenswrapper[4892]: I0122 10:30:00.248897 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/543b8771-6f81-4ff6-9077-4501690e9d9e-secret-volume\") pod \"collect-profiles-29484630-lnwlk\" (UID: \"543b8771-6f81-4ff6-9077-4501690e9d9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-lnwlk" Jan 22 10:30:00 crc kubenswrapper[4892]: I0122 10:30:00.248932 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/543b8771-6f81-4ff6-9077-4501690e9d9e-config-volume\") pod \"collect-profiles-29484630-lnwlk\" (UID: \"543b8771-6f81-4ff6-9077-4501690e9d9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-lnwlk" Jan 22 10:30:00 crc kubenswrapper[4892]: I0122 10:30:00.350957 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd7wq\" (UniqueName: \"kubernetes.io/projected/543b8771-6f81-4ff6-9077-4501690e9d9e-kube-api-access-rd7wq\") pod \"collect-profiles-29484630-lnwlk\" (UID: \"543b8771-6f81-4ff6-9077-4501690e9d9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-lnwlk" Jan 22 10:30:00 crc kubenswrapper[4892]: I0122 10:30:00.351030 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/543b8771-6f81-4ff6-9077-4501690e9d9e-secret-volume\") pod \"collect-profiles-29484630-lnwlk\" (UID: \"543b8771-6f81-4ff6-9077-4501690e9d9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-lnwlk" Jan 22 10:30:00 crc kubenswrapper[4892]: I0122 10:30:00.351058 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/543b8771-6f81-4ff6-9077-4501690e9d9e-config-volume\") pod \"collect-profiles-29484630-lnwlk\" (UID: \"543b8771-6f81-4ff6-9077-4501690e9d9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-lnwlk" Jan 22 10:30:00 crc kubenswrapper[4892]: I0122 10:30:00.351966 
4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/543b8771-6f81-4ff6-9077-4501690e9d9e-config-volume\") pod \"collect-profiles-29484630-lnwlk\" (UID: \"543b8771-6f81-4ff6-9077-4501690e9d9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-lnwlk" Jan 22 10:30:00 crc kubenswrapper[4892]: I0122 10:30:00.357126 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/543b8771-6f81-4ff6-9077-4501690e9d9e-secret-volume\") pod \"collect-profiles-29484630-lnwlk\" (UID: \"543b8771-6f81-4ff6-9077-4501690e9d9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-lnwlk" Jan 22 10:30:00 crc kubenswrapper[4892]: I0122 10:30:00.368572 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd7wq\" (UniqueName: \"kubernetes.io/projected/543b8771-6f81-4ff6-9077-4501690e9d9e-kube-api-access-rd7wq\") pod \"collect-profiles-29484630-lnwlk\" (UID: \"543b8771-6f81-4ff6-9077-4501690e9d9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-lnwlk" Jan 22 10:30:00 crc kubenswrapper[4892]: I0122 10:30:00.479995 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-lnwlk" Jan 22 10:30:00 crc kubenswrapper[4892]: I0122 10:30:00.973804 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484630-lnwlk"] Jan 22 10:30:00 crc kubenswrapper[4892]: W0122 10:30:00.978852 4892 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod543b8771_6f81_4ff6_9077_4501690e9d9e.slice/crio-8bdacdee3ca9cd91a7b27540e2d498ebef70616feb4b35e31e1be5c76845252e WatchSource:0}: Error finding container 8bdacdee3ca9cd91a7b27540e2d498ebef70616feb4b35e31e1be5c76845252e: Status 404 returned error can't find the container with id 8bdacdee3ca9cd91a7b27540e2d498ebef70616feb4b35e31e1be5c76845252e Jan 22 10:30:01 crc kubenswrapper[4892]: I0122 10:30:01.166071 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-lnwlk" event={"ID":"543b8771-6f81-4ff6-9077-4501690e9d9e","Type":"ContainerStarted","Data":"8bdacdee3ca9cd91a7b27540e2d498ebef70616feb4b35e31e1be5c76845252e"} Jan 22 10:30:02 crc kubenswrapper[4892]: I0122 10:30:02.175977 4892 generic.go:334] "Generic (PLEG): container finished" podID="543b8771-6f81-4ff6-9077-4501690e9d9e" containerID="81421bbb6ed270115ec70a1bf30d3d1e679cdaa5cbca069241c23058a4d3117e" exitCode=0 Jan 22 10:30:02 crc kubenswrapper[4892]: I0122 10:30:02.176032 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-lnwlk" event={"ID":"543b8771-6f81-4ff6-9077-4501690e9d9e","Type":"ContainerDied","Data":"81421bbb6ed270115ec70a1bf30d3d1e679cdaa5cbca069241c23058a4d3117e"} Jan 22 10:30:03 crc kubenswrapper[4892]: I0122 10:30:03.545969 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-lnwlk" Jan 22 10:30:03 crc kubenswrapper[4892]: I0122 10:30:03.640309 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd7wq\" (UniqueName: \"kubernetes.io/projected/543b8771-6f81-4ff6-9077-4501690e9d9e-kube-api-access-rd7wq\") pod \"543b8771-6f81-4ff6-9077-4501690e9d9e\" (UID: \"543b8771-6f81-4ff6-9077-4501690e9d9e\") " Jan 22 10:30:03 crc kubenswrapper[4892]: I0122 10:30:03.640383 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/543b8771-6f81-4ff6-9077-4501690e9d9e-config-volume\") pod \"543b8771-6f81-4ff6-9077-4501690e9d9e\" (UID: \"543b8771-6f81-4ff6-9077-4501690e9d9e\") " Jan 22 10:30:03 crc kubenswrapper[4892]: I0122 10:30:03.640408 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/543b8771-6f81-4ff6-9077-4501690e9d9e-secret-volume\") pod \"543b8771-6f81-4ff6-9077-4501690e9d9e\" (UID: \"543b8771-6f81-4ff6-9077-4501690e9d9e\") " Jan 22 10:30:03 crc kubenswrapper[4892]: I0122 10:30:03.641594 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/543b8771-6f81-4ff6-9077-4501690e9d9e-config-volume" (OuterVolumeSpecName: "config-volume") pod "543b8771-6f81-4ff6-9077-4501690e9d9e" (UID: "543b8771-6f81-4ff6-9077-4501690e9d9e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:30:03 crc kubenswrapper[4892]: I0122 10:30:03.646661 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/543b8771-6f81-4ff6-9077-4501690e9d9e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "543b8771-6f81-4ff6-9077-4501690e9d9e" (UID: "543b8771-6f81-4ff6-9077-4501690e9d9e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:30:03 crc kubenswrapper[4892]: I0122 10:30:03.653851 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/543b8771-6f81-4ff6-9077-4501690e9d9e-kube-api-access-rd7wq" (OuterVolumeSpecName: "kube-api-access-rd7wq") pod "543b8771-6f81-4ff6-9077-4501690e9d9e" (UID: "543b8771-6f81-4ff6-9077-4501690e9d9e"). InnerVolumeSpecName "kube-api-access-rd7wq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:30:03 crc kubenswrapper[4892]: I0122 10:30:03.743727 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rd7wq\" (UniqueName: \"kubernetes.io/projected/543b8771-6f81-4ff6-9077-4501690e9d9e-kube-api-access-rd7wq\") on node \"crc\" DevicePath \"\"" Jan 22 10:30:03 crc kubenswrapper[4892]: I0122 10:30:03.743849 4892 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/543b8771-6f81-4ff6-9077-4501690e9d9e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 10:30:03 crc kubenswrapper[4892]: I0122 10:30:03.743871 4892 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/543b8771-6f81-4ff6-9077-4501690e9d9e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 10:30:04 crc kubenswrapper[4892]: I0122 10:30:04.197838 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-lnwlk" Jan 22 10:30:04 crc kubenswrapper[4892]: I0122 10:30:04.197685 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-lnwlk" event={"ID":"543b8771-6f81-4ff6-9077-4501690e9d9e","Type":"ContainerDied","Data":"8bdacdee3ca9cd91a7b27540e2d498ebef70616feb4b35e31e1be5c76845252e"} Jan 22 10:30:04 crc kubenswrapper[4892]: I0122 10:30:04.206185 4892 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bdacdee3ca9cd91a7b27540e2d498ebef70616feb4b35e31e1be5c76845252e" Jan 22 10:30:04 crc kubenswrapper[4892]: I0122 10:30:04.631181 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484585-fg5xc"] Jan 22 10:30:04 crc kubenswrapper[4892]: I0122 10:30:04.640408 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484585-fg5xc"] Jan 22 10:30:05 crc kubenswrapper[4892]: I0122 10:30:05.431356 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aac6c730-e7b9-47a7-965b-0c6cac408873" path="/var/lib/kubelet/pods/aac6c730-e7b9-47a7-965b-0c6cac408873/volumes" Jan 22 10:30:31 crc kubenswrapper[4892]: I0122 10:30:31.388335 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zlzqp"] Jan 22 10:30:31 crc kubenswrapper[4892]: E0122 10:30:31.391689 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="543b8771-6f81-4ff6-9077-4501690e9d9e" containerName="collect-profiles" Jan 22 10:30:31 crc kubenswrapper[4892]: I0122 10:30:31.391746 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="543b8771-6f81-4ff6-9077-4501690e9d9e" containerName="collect-profiles" Jan 22 10:30:31 crc kubenswrapper[4892]: I0122 10:30:31.392236 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="543b8771-6f81-4ff6-9077-4501690e9d9e" containerName="collect-profiles" Jan 22 10:30:31 crc kubenswrapper[4892]: I0122 10:30:31.396318 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zlzqp" Jan 22 10:30:31 crc kubenswrapper[4892]: I0122 10:30:31.402763 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zlzqp"] Jan 22 10:30:31 crc kubenswrapper[4892]: I0122 10:30:31.526174 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e4d871b-7db1-47aa-ac46-1f6c55ea34e7-catalog-content\") pod \"certified-operators-zlzqp\" (UID: \"9e4d871b-7db1-47aa-ac46-1f6c55ea34e7\") " pod="openshift-marketplace/certified-operators-zlzqp" Jan 22 10:30:31 crc kubenswrapper[4892]: I0122 10:30:31.526217 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8f75\" (UniqueName: \"kubernetes.io/projected/9e4d871b-7db1-47aa-ac46-1f6c55ea34e7-kube-api-access-z8f75\") pod \"certified-operators-zlzqp\" (UID: \"9e4d871b-7db1-47aa-ac46-1f6c55ea34e7\") " pod="openshift-marketplace/certified-operators-zlzqp" Jan 22 10:30:31 crc kubenswrapper[4892]: I0122 10:30:31.526268 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e4d871b-7db1-47aa-ac46-1f6c55ea34e7-utilities\") pod \"certified-operators-zlzqp\" (UID: \"9e4d871b-7db1-47aa-ac46-1f6c55ea34e7\") " pod="openshift-marketplace/certified-operators-zlzqp" Jan 22 10:30:31 crc kubenswrapper[4892]: I0122 10:30:31.627771 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e4d871b-7db1-47aa-ac46-1f6c55ea34e7-catalog-content\") pod \"certified-operators-zlzqp\" (UID: \"9e4d871b-7db1-47aa-ac46-1f6c55ea34e7\") " pod="openshift-marketplace/certified-operators-zlzqp" Jan 22 10:30:31 crc kubenswrapper[4892]: I0122 10:30:31.627826 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8f75\" (UniqueName: \"kubernetes.io/projected/9e4d871b-7db1-47aa-ac46-1f6c55ea34e7-kube-api-access-z8f75\") pod \"certified-operators-zlzqp\" (UID: \"9e4d871b-7db1-47aa-ac46-1f6c55ea34e7\") " pod="openshift-marketplace/certified-operators-zlzqp" Jan 22 10:30:31 crc kubenswrapper[4892]: I0122 10:30:31.627887 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e4d871b-7db1-47aa-ac46-1f6c55ea34e7-utilities\") pod \"certified-operators-zlzqp\" (UID: \"9e4d871b-7db1-47aa-ac46-1f6c55ea34e7\") " pod="openshift-marketplace/certified-operators-zlzqp" Jan 22 10:30:31 crc kubenswrapper[4892]: I0122 10:30:31.628363 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e4d871b-7db1-47aa-ac46-1f6c55ea34e7-catalog-content\") pod \"certified-operators-zlzqp\" (UID: \"9e4d871b-7db1-47aa-ac46-1f6c55ea34e7\") " pod="openshift-marketplace/certified-operators-zlzqp" Jan 22 10:30:31 crc kubenswrapper[4892]: I0122 10:30:31.628401 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e4d871b-7db1-47aa-ac46-1f6c55ea34e7-utilities\") pod \"certified-operators-zlzqp\" (UID: \"9e4d871b-7db1-47aa-ac46-1f6c55ea34e7\") " pod="openshift-marketplace/certified-operators-zlzqp" Jan 22 10:30:31 crc kubenswrapper[4892]: I0122 10:30:31.648167 4892 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-z8f75\" (UniqueName: \"kubernetes.io/projected/9e4d871b-7db1-47aa-ac46-1f6c55ea34e7-kube-api-access-z8f75\") pod \"certified-operators-zlzqp\" (UID: \"9e4d871b-7db1-47aa-ac46-1f6c55ea34e7\") " pod="openshift-marketplace/certified-operators-zlzqp" Jan 22 10:30:31 crc kubenswrapper[4892]: I0122 10:30:31.721229 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zlzqp" Jan 22 10:30:32 crc kubenswrapper[4892]: I0122 10:30:32.207788 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zlzqp"] Jan 22 10:30:32 crc kubenswrapper[4892]: I0122 10:30:32.429649 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zlzqp" event={"ID":"9e4d871b-7db1-47aa-ac46-1f6c55ea34e7","Type":"ContainerStarted","Data":"cd5a64b38fe14a97396a42d6e134e53e8ad20156a1bd1843f86f1c716d2ec284"} Jan 22 10:30:33 crc kubenswrapper[4892]: I0122 10:30:33.439422 4892 generic.go:334] "Generic (PLEG): container finished" podID="9e4d871b-7db1-47aa-ac46-1f6c55ea34e7" containerID="8e735f62fc8fd0feeb12383d04d2df74b4669c8ce047eb576383a141ffd8e8c2" exitCode=0 Jan 22 10:30:33 crc kubenswrapper[4892]: I0122 10:30:33.439467 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zlzqp" event={"ID":"9e4d871b-7db1-47aa-ac46-1f6c55ea34e7","Type":"ContainerDied","Data":"8e735f62fc8fd0feeb12383d04d2df74b4669c8ce047eb576383a141ffd8e8c2"} Jan 22 10:30:33 crc kubenswrapper[4892]: I0122 10:30:33.441889 4892 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 10:30:35 crc kubenswrapper[4892]: I0122 10:30:35.458486 4892 generic.go:334] "Generic (PLEG): container finished" podID="9e4d871b-7db1-47aa-ac46-1f6c55ea34e7" containerID="27f5a2fa258d14c287eb3c52dac10f482582fdf4a92fba958a6317eeca3a5f6e" exitCode=0 Jan 22 10:30:35 crc kubenswrapper[4892]: I0122 10:30:35.458747 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zlzqp" event={"ID":"9e4d871b-7db1-47aa-ac46-1f6c55ea34e7","Type":"ContainerDied","Data":"27f5a2fa258d14c287eb3c52dac10f482582fdf4a92fba958a6317eeca3a5f6e"} Jan 22 10:30:37 crc kubenswrapper[4892]: I0122 10:30:37.496716 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zlzqp" event={"ID":"9e4d871b-7db1-47aa-ac46-1f6c55ea34e7","Type":"ContainerStarted","Data":"f38c452028a1e8ac4f8af275e9c43392f742ba22c214875e6275695711a6828c"} Jan 22 10:30:37 crc kubenswrapper[4892]: I0122 10:30:37.523581 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zlzqp" podStartSLOduration=3.563072813 podStartE2EDuration="6.523559725s" podCreationTimestamp="2026-01-22 10:30:31 +0000 UTC" firstStartedPulling="2026-01-22 10:30:33.441618968 +0000 UTC m=+4803.285698021" lastFinishedPulling="2026-01-22 10:30:36.40210587 +0000 UTC m=+4806.246184933" observedRunningTime="2026-01-22 10:30:37.513129784 +0000 UTC m=+4807.357208847" watchObservedRunningTime="2026-01-22 10:30:37.523559725 +0000 UTC m=+4807.367638788" Jan 22 10:30:41 crc kubenswrapper[4892]: I0122 10:30:41.722343 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zlzqp" Jan 22 10:30:41 crc kubenswrapper[4892]: I0122 10:30:41.723809 4892 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zlzqp" Jan 22 10:30:41 crc kubenswrapper[4892]: I0122 10:30:41.770405 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zlzqp" Jan 22 10:30:42 crc kubenswrapper[4892]: I0122 10:30:42.327007 4892 scope.go:117] "RemoveContainer" containerID="b40dd2f4c05051f4a785c8b4f7602f97173de5ddcf0ddd8bd90692934c5061ab" Jan 22 10:30:42 crc kubenswrapper[4892]: I0122 10:30:42.581148 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zlzqp" Jan 22 10:30:42 crc kubenswrapper[4892]: I0122 10:30:42.632086 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zlzqp"] Jan 22 10:30:44 crc kubenswrapper[4892]: I0122 10:30:44.553487 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zlzqp" podUID="9e4d871b-7db1-47aa-ac46-1f6c55ea34e7" containerName="registry-server" containerID="cri-o://f38c452028a1e8ac4f8af275e9c43392f742ba22c214875e6275695711a6828c" gracePeriod=2 Jan 22 10:30:45 crc kubenswrapper[4892]: I0122 10:30:45.307763 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zlzqp" Jan 22 10:30:45 crc kubenswrapper[4892]: I0122 10:30:45.391015 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e4d871b-7db1-47aa-ac46-1f6c55ea34e7-utilities\") pod \"9e4d871b-7db1-47aa-ac46-1f6c55ea34e7\" (UID: \"9e4d871b-7db1-47aa-ac46-1f6c55ea34e7\") " Jan 22 10:30:45 crc kubenswrapper[4892]: I0122 10:30:45.391314 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e4d871b-7db1-47aa-ac46-1f6c55ea34e7-catalog-content\") pod \"9e4d871b-7db1-47aa-ac46-1f6c55ea34e7\" (UID: \"9e4d871b-7db1-47aa-ac46-1f6c55ea34e7\") " Jan 22 10:30:45 crc kubenswrapper[4892]: I0122 10:30:45.391439 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8f75\" (UniqueName: \"kubernetes.io/projected/9e4d871b-7db1-47aa-ac46-1f6c55ea34e7-kube-api-access-z8f75\") pod \"9e4d871b-7db1-47aa-ac46-1f6c55ea34e7\" (UID: \"9e4d871b-7db1-47aa-ac46-1f6c55ea34e7\") " Jan 22 10:30:45 crc kubenswrapper[4892]: I0122 10:30:45.392112 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e4d871b-7db1-47aa-ac46-1f6c55ea34e7-utilities" (OuterVolumeSpecName: "utilities") pod "9e4d871b-7db1-47aa-ac46-1f6c55ea34e7" (UID: "9e4d871b-7db1-47aa-ac46-1f6c55ea34e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:30:45 crc kubenswrapper[4892]: I0122 10:30:45.392408 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e4d871b-7db1-47aa-ac46-1f6c55ea34e7-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:30:45 crc kubenswrapper[4892]: I0122 10:30:45.397705 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e4d871b-7db1-47aa-ac46-1f6c55ea34e7-kube-api-access-z8f75" (OuterVolumeSpecName: "kube-api-access-z8f75") pod "9e4d871b-7db1-47aa-ac46-1f6c55ea34e7" (UID: "9e4d871b-7db1-47aa-ac46-1f6c55ea34e7"). 
InnerVolumeSpecName "kube-api-access-z8f75". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:30:45 crc kubenswrapper[4892]: I0122 10:30:45.442081 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e4d871b-7db1-47aa-ac46-1f6c55ea34e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e4d871b-7db1-47aa-ac46-1f6c55ea34e7" (UID: "9e4d871b-7db1-47aa-ac46-1f6c55ea34e7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:30:45 crc kubenswrapper[4892]: I0122 10:30:45.494861 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e4d871b-7db1-47aa-ac46-1f6c55ea34e7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:30:45 crc kubenswrapper[4892]: I0122 10:30:45.494900 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8f75\" (UniqueName: \"kubernetes.io/projected/9e4d871b-7db1-47aa-ac46-1f6c55ea34e7-kube-api-access-z8f75\") on node \"crc\" DevicePath \"\"" Jan 22 10:30:45 crc kubenswrapper[4892]: I0122 10:30:45.562931 4892 generic.go:334] "Generic (PLEG): container finished" podID="9e4d871b-7db1-47aa-ac46-1f6c55ea34e7" containerID="f38c452028a1e8ac4f8af275e9c43392f742ba22c214875e6275695711a6828c" exitCode=0 Jan 22 10:30:45 crc kubenswrapper[4892]: I0122 10:30:45.562967 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zlzqp" event={"ID":"9e4d871b-7db1-47aa-ac46-1f6c55ea34e7","Type":"ContainerDied","Data":"f38c452028a1e8ac4f8af275e9c43392f742ba22c214875e6275695711a6828c"} Jan 22 10:30:45 crc kubenswrapper[4892]: I0122 10:30:45.562999 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zlzqp" event={"ID":"9e4d871b-7db1-47aa-ac46-1f6c55ea34e7","Type":"ContainerDied","Data":"cd5a64b38fe14a97396a42d6e134e53e8ad20156a1bd1843f86f1c716d2ec284"} Jan 22 10:30:45 crc kubenswrapper[4892]: I0122 10:30:45.563022 4892 scope.go:117] "RemoveContainer" containerID="f38c452028a1e8ac4f8af275e9c43392f742ba22c214875e6275695711a6828c" Jan 22 10:30:45 crc kubenswrapper[4892]: I0122 10:30:45.564043 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zlzqp" Jan 22 10:30:45 crc kubenswrapper[4892]: I0122 10:30:45.591213 4892 scope.go:117] "RemoveContainer" containerID="27f5a2fa258d14c287eb3c52dac10f482582fdf4a92fba958a6317eeca3a5f6e" Jan 22 10:30:45 crc kubenswrapper[4892]: I0122 10:30:45.613159 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zlzqp"] Jan 22 10:30:45 crc kubenswrapper[4892]: I0122 10:30:45.621520 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zlzqp"] Jan 22 10:30:45 crc kubenswrapper[4892]: I0122 10:30:45.623583 4892 scope.go:117] "RemoveContainer" containerID="8e735f62fc8fd0feeb12383d04d2df74b4669c8ce047eb576383a141ffd8e8c2" Jan 22 10:30:45 crc kubenswrapper[4892]: I0122 10:30:45.664595 4892 scope.go:117] "RemoveContainer" containerID="f38c452028a1e8ac4f8af275e9c43392f742ba22c214875e6275695711a6828c" Jan 22 10:30:45 crc kubenswrapper[4892]: E0122 10:30:45.665033 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f38c452028a1e8ac4f8af275e9c43392f742ba22c214875e6275695711a6828c\": container with ID starting with f38c452028a1e8ac4f8af275e9c43392f742ba22c214875e6275695711a6828c not found: ID does not exist" containerID="f38c452028a1e8ac4f8af275e9c43392f742ba22c214875e6275695711a6828c" Jan 22 10:30:45 crc kubenswrapper[4892]: I0122 10:30:45.665065 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f38c452028a1e8ac4f8af275e9c43392f742ba22c214875e6275695711a6828c"} err="failed to get container status \"f38c452028a1e8ac4f8af275e9c43392f742ba22c214875e6275695711a6828c\": rpc error: code = NotFound desc = could not find container \"f38c452028a1e8ac4f8af275e9c43392f742ba22c214875e6275695711a6828c\": container with ID starting with f38c452028a1e8ac4f8af275e9c43392f742ba22c214875e6275695711a6828c not found: ID does not exist" Jan 22 10:30:45 crc kubenswrapper[4892]: I0122 10:30:45.665085 4892 scope.go:117] "RemoveContainer" containerID="27f5a2fa258d14c287eb3c52dac10f482582fdf4a92fba958a6317eeca3a5f6e" Jan 22 10:30:45 crc kubenswrapper[4892]: E0122 10:30:45.665303 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27f5a2fa258d14c287eb3c52dac10f482582fdf4a92fba958a6317eeca3a5f6e\": container with ID starting with 27f5a2fa258d14c287eb3c52dac10f482582fdf4a92fba958a6317eeca3a5f6e not found: ID does not exist" containerID="27f5a2fa258d14c287eb3c52dac10f482582fdf4a92fba958a6317eeca3a5f6e" Jan 22 10:30:45 crc kubenswrapper[4892]: I0122 10:30:45.665385 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27f5a2fa258d14c287eb3c52dac10f482582fdf4a92fba958a6317eeca3a5f6e"} err="failed to get container status \"27f5a2fa258d14c287eb3c52dac10f482582fdf4a92fba958a6317eeca3a5f6e\": rpc error: code = NotFound desc = could not find container \"27f5a2fa258d14c287eb3c52dac10f482582fdf4a92fba958a6317eeca3a5f6e\": container with ID starting with 27f5a2fa258d14c287eb3c52dac10f482582fdf4a92fba958a6317eeca3a5f6e not found: ID does not exist" Jan 22 10:30:45 crc kubenswrapper[4892]: I0122 10:30:45.665451 4892 scope.go:117] "RemoveContainer" containerID="8e735f62fc8fd0feeb12383d04d2df74b4669c8ce047eb576383a141ffd8e8c2" Jan 22 10:30:45 crc kubenswrapper[4892]: E0122 10:30:45.665724 4892 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8e735f62fc8fd0feeb12383d04d2df74b4669c8ce047eb576383a141ffd8e8c2\": container with ID starting with 8e735f62fc8fd0feeb12383d04d2df74b4669c8ce047eb576383a141ffd8e8c2 not found: ID does not exist" containerID="8e735f62fc8fd0feeb12383d04d2df74b4669c8ce047eb576383a141ffd8e8c2" Jan 22 10:30:45 crc kubenswrapper[4892]: I0122 10:30:45.665746 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e735f62fc8fd0feeb12383d04d2df74b4669c8ce047eb576383a141ffd8e8c2"} err="failed to get container status \"8e735f62fc8fd0feeb12383d04d2df74b4669c8ce047eb576383a141ffd8e8c2\": rpc error: code = NotFound desc = could not find container \"8e735f62fc8fd0feeb12383d04d2df74b4669c8ce047eb576383a141ffd8e8c2\": container with ID starting with 8e735f62fc8fd0feeb12383d04d2df74b4669c8ce047eb576383a141ffd8e8c2 not found: ID does not exist" Jan 22 10:30:47 crc kubenswrapper[4892]: I0122 10:30:47.432820 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e4d871b-7db1-47aa-ac46-1f6c55ea34e7" path="/var/lib/kubelet/pods/9e4d871b-7db1-47aa-ac46-1f6c55ea34e7/volumes" Jan 22 10:31:05 crc kubenswrapper[4892]: I0122 10:31:05.111071 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pdht9"] Jan 22 10:31:05 crc kubenswrapper[4892]: E0122 10:31:05.112027 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e4d871b-7db1-47aa-ac46-1f6c55ea34e7" containerName="registry-server" Jan 22 10:31:05 crc kubenswrapper[4892]: I0122 10:31:05.112044 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e4d871b-7db1-47aa-ac46-1f6c55ea34e7" containerName="registry-server" Jan 22 10:31:05 crc kubenswrapper[4892]: E0122 10:31:05.112073 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e4d871b-7db1-47aa-ac46-1f6c55ea34e7" containerName="extract-content" Jan 22 10:31:05 crc kubenswrapper[4892]: I0122 10:31:05.112081 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e4d871b-7db1-47aa-ac46-1f6c55ea34e7" containerName="extract-content" Jan 22 10:31:05 crc kubenswrapper[4892]: E0122 10:31:05.112116 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e4d871b-7db1-47aa-ac46-1f6c55ea34e7" containerName="extract-utilities" Jan 22 10:31:05 crc kubenswrapper[4892]: I0122 10:31:05.112125 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e4d871b-7db1-47aa-ac46-1f6c55ea34e7" containerName="extract-utilities" Jan 22 10:31:05 crc kubenswrapper[4892]: I0122 10:31:05.112348 4892 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e4d871b-7db1-47aa-ac46-1f6c55ea34e7" containerName="registry-server" Jan 22 10:31:05 crc kubenswrapper[4892]: I0122 10:31:05.113873 4892 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pdht9" Jan 22 10:31:05 crc kubenswrapper[4892]: I0122 10:31:05.130269 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pdht9"] Jan 22 10:31:05 crc kubenswrapper[4892]: I0122 10:31:05.173272 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2msc8\" (UniqueName: \"kubernetes.io/projected/9d518312-d8b9-4d48-80ce-756a40f1cf58-kube-api-access-2msc8\") pod \"redhat-operators-pdht9\" (UID: \"9d518312-d8b9-4d48-80ce-756a40f1cf58\") " pod="openshift-marketplace/redhat-operators-pdht9" Jan 22 10:31:05 crc kubenswrapper[4892]: I0122 10:31:05.173651 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d518312-d8b9-4d48-80ce-756a40f1cf58-utilities\") pod \"redhat-operators-pdht9\" (UID: \"9d518312-d8b9-4d48-80ce-756a40f1cf58\") " pod="openshift-marketplace/redhat-operators-pdht9" Jan 22 10:31:05 crc kubenswrapper[4892]: I0122 10:31:05.173846 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d518312-d8b9-4d48-80ce-756a40f1cf58-catalog-content\") pod \"redhat-operators-pdht9\" (UID: \"9d518312-d8b9-4d48-80ce-756a40f1cf58\") " pod="openshift-marketplace/redhat-operators-pdht9" Jan 22 10:31:05 crc kubenswrapper[4892]: I0122 10:31:05.275134 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d518312-d8b9-4d48-80ce-756a40f1cf58-catalog-content\") pod \"redhat-operators-pdht9\" (UID: \"9d518312-d8b9-4d48-80ce-756a40f1cf58\") " pod="openshift-marketplace/redhat-operators-pdht9" Jan 22 10:31:05 crc kubenswrapper[4892]: I0122 10:31:05.275271 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2msc8\" (UniqueName: \"kubernetes.io/projected/9d518312-d8b9-4d48-80ce-756a40f1cf58-kube-api-access-2msc8\") pod \"redhat-operators-pdht9\" (UID: \"9d518312-d8b9-4d48-80ce-756a40f1cf58\") " pod="openshift-marketplace/redhat-operators-pdht9" Jan 22 10:31:05 crc kubenswrapper[4892]: I0122 10:31:05.275354 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d518312-d8b9-4d48-80ce-756a40f1cf58-utilities\") pod \"redhat-operators-pdht9\" (UID: \"9d518312-d8b9-4d48-80ce-756a40f1cf58\") " pod="openshift-marketplace/redhat-operators-pdht9" Jan 22 10:31:05 crc kubenswrapper[4892]: I0122 10:31:05.275897 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d518312-d8b9-4d48-80ce-756a40f1cf58-utilities\") pod \"redhat-operators-pdht9\" (UID: \"9d518312-d8b9-4d48-80ce-756a40f1cf58\") " pod="openshift-marketplace/redhat-operators-pdht9" Jan 22 10:31:05 crc kubenswrapper[4892]: I0122 10:31:05.276071 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d518312-d8b9-4d48-80ce-756a40f1cf58-catalog-content\") pod \"redhat-operators-pdht9\" (UID: \"9d518312-d8b9-4d48-80ce-756a40f1cf58\") " pod="openshift-marketplace/redhat-operators-pdht9" Jan 22 10:31:05 crc kubenswrapper[4892]: I0122 10:31:05.297130 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2msc8\" (UniqueName: \"kubernetes.io/projected/9d518312-d8b9-4d48-80ce-756a40f1cf58-kube-api-access-2msc8\") pod \"redhat-operators-pdht9\" (UID: \"9d518312-d8b9-4d48-80ce-756a40f1cf58\") " pod="openshift-marketplace/redhat-operators-pdht9" Jan 22 10:31:05 crc kubenswrapper[4892]: I0122 10:31:05.429772 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pdht9" Jan 22 10:31:05 crc kubenswrapper[4892]: I0122 10:31:05.966065 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pdht9"] Jan 22 10:31:06 crc kubenswrapper[4892]: I0122 10:31:06.764574 4892 generic.go:334] "Generic (PLEG): container finished" podID="9d518312-d8b9-4d48-80ce-756a40f1cf58" containerID="fa668485e42ba5c413f2279501deb37d51d777cf939bd3c8a9993cee7171efa2" exitCode=0 Jan 22 10:31:06 crc kubenswrapper[4892]: I0122 10:31:06.764893 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pdht9" event={"ID":"9d518312-d8b9-4d48-80ce-756a40f1cf58","Type":"ContainerDied","Data":"fa668485e42ba5c413f2279501deb37d51d777cf939bd3c8a9993cee7171efa2"} Jan 22 10:31:06 crc kubenswrapper[4892]: I0122 10:31:06.764957 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pdht9" event={"ID":"9d518312-d8b9-4d48-80ce-756a40f1cf58","Type":"ContainerStarted","Data":"6c3ac4d3406d7a87a2bbdda1c183c7841ee09d9f4c78030322f74f2233ce0e18"} Jan 22 10:31:08 crc kubenswrapper[4892]: I0122 10:31:08.792513 4892 generic.go:334] "Generic (PLEG): container finished" podID="9d518312-d8b9-4d48-80ce-756a40f1cf58" containerID="abe3e6e4542372a884009e6e663457ba37d2f1c68a3dbac6239d1c7d1a15a4e5" exitCode=0 Jan 22 10:31:08 crc kubenswrapper[4892]: I0122 10:31:08.792615 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pdht9" event={"ID":"9d518312-d8b9-4d48-80ce-756a40f1cf58","Type":"ContainerDied","Data":"abe3e6e4542372a884009e6e663457ba37d2f1c68a3dbac6239d1c7d1a15a4e5"} Jan 22 10:31:09 crc kubenswrapper[4892]: I0122 10:31:09.806435 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pdht9" event={"ID":"9d518312-d8b9-4d48-80ce-756a40f1cf58","Type":"ContainerStarted","Data":"f81eb8b7e69a1edbe52357e1f3f06f4d0c882ecd5addfd0f61c6278655edaeb4"} Jan 22 10:31:09 crc kubenswrapper[4892]: I0122 10:31:09.830951 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pdht9" podStartSLOduration=2.2050067 podStartE2EDuration="4.830926868s" podCreationTimestamp="2026-01-22 10:31:05 +0000 UTC" firstStartedPulling="2026-01-22 10:31:06.770229404 +0000 UTC m=+4836.614308487" lastFinishedPulling="2026-01-22 10:31:09.396149582 +0000 UTC m=+4839.240228655" observedRunningTime="2026-01-22 10:31:09.825135643 +0000 UTC m=+4839.669214696" watchObservedRunningTime="2026-01-22 10:31:09.830926868 +0000 UTC m=+4839.675005931" Jan 22 10:31:15 crc kubenswrapper[4892]: I0122 10:31:15.449737 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pdht9" Jan 22 10:31:15 crc kubenswrapper[4892]: I0122 10:31:15.451037 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pdht9" Jan 22 10:31:15 crc kubenswrapper[4892]: I0122 10:31:15.914904 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-pdht9" Jan 22 10:31:15 crc kubenswrapper[4892]: I0122 10:31:15.967587 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pdht9" Jan 22 10:31:16 crc kubenswrapper[4892]: I0122 10:31:16.151158 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pdht9"] Jan 22 10:31:17 crc kubenswrapper[4892]: I0122 10:31:17.894319 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pdht9" podUID="9d518312-d8b9-4d48-80ce-756a40f1cf58" containerName="registry-server" containerID="cri-o://f81eb8b7e69a1edbe52357e1f3f06f4d0c882ecd5addfd0f61c6278655edaeb4" gracePeriod=2 Jan 22 10:31:18 crc kubenswrapper[4892]: I0122 10:31:18.903708 4892 generic.go:334] "Generic (PLEG): container finished" podID="9d518312-d8b9-4d48-80ce-756a40f1cf58" containerID="f81eb8b7e69a1edbe52357e1f3f06f4d0c882ecd5addfd0f61c6278655edaeb4" exitCode=0 Jan 22 10:31:18 crc kubenswrapper[4892]: I0122 10:31:18.903789 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pdht9" event={"ID":"9d518312-d8b9-4d48-80ce-756a40f1cf58","Type":"ContainerDied","Data":"f81eb8b7e69a1edbe52357e1f3f06f4d0c882ecd5addfd0f61c6278655edaeb4"} Jan 22 10:31:19 crc kubenswrapper[4892]: I0122 10:31:19.269841 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pdht9" Jan 22 10:31:19 crc kubenswrapper[4892]: I0122 10:31:19.464655 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d518312-d8b9-4d48-80ce-756a40f1cf58-catalog-content\") pod \"9d518312-d8b9-4d48-80ce-756a40f1cf58\" (UID: \"9d518312-d8b9-4d48-80ce-756a40f1cf58\") " Jan 22 10:31:19 crc kubenswrapper[4892]: I0122 10:31:19.465152 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d518312-d8b9-4d48-80ce-756a40f1cf58-utilities\") pod \"9d518312-d8b9-4d48-80ce-756a40f1cf58\" (UID: \"9d518312-d8b9-4d48-80ce-756a40f1cf58\") " Jan 22 10:31:19 crc kubenswrapper[4892]: I0122 10:31:19.466595 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d518312-d8b9-4d48-80ce-756a40f1cf58-utilities" (OuterVolumeSpecName: "utilities") pod "9d518312-d8b9-4d48-80ce-756a40f1cf58" (UID: "9d518312-d8b9-4d48-80ce-756a40f1cf58"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:31:19 crc kubenswrapper[4892]: I0122 10:31:19.466680 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2msc8\" (UniqueName: \"kubernetes.io/projected/9d518312-d8b9-4d48-80ce-756a40f1cf58-kube-api-access-2msc8\") pod \"9d518312-d8b9-4d48-80ce-756a40f1cf58\" (UID: \"9d518312-d8b9-4d48-80ce-756a40f1cf58\") " Jan 22 10:31:19 crc kubenswrapper[4892]: I0122 10:31:19.468152 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d518312-d8b9-4d48-80ce-756a40f1cf58-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:31:19 crc kubenswrapper[4892]: I0122 10:31:19.470765 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d518312-d8b9-4d48-80ce-756a40f1cf58-kube-api-access-2msc8" (OuterVolumeSpecName: "kube-api-access-2msc8") pod "9d518312-d8b9-4d48-80ce-756a40f1cf58" (UID: "9d518312-d8b9-4d48-80ce-756a40f1cf58"). InnerVolumeSpecName "kube-api-access-2msc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:31:19 crc kubenswrapper[4892]: I0122 10:31:19.570599 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2msc8\" (UniqueName: \"kubernetes.io/projected/9d518312-d8b9-4d48-80ce-756a40f1cf58-kube-api-access-2msc8\") on node \"crc\" DevicePath \"\"" Jan 22 10:31:19 crc kubenswrapper[4892]: I0122 10:31:19.590171 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d518312-d8b9-4d48-80ce-756a40f1cf58-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d518312-d8b9-4d48-80ce-756a40f1cf58" (UID: "9d518312-d8b9-4d48-80ce-756a40f1cf58"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:31:19 crc kubenswrapper[4892]: I0122 10:31:19.672098 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d518312-d8b9-4d48-80ce-756a40f1cf58-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:31:19 crc kubenswrapper[4892]: I0122 10:31:19.916702 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pdht9" event={"ID":"9d518312-d8b9-4d48-80ce-756a40f1cf58","Type":"ContainerDied","Data":"6c3ac4d3406d7a87a2bbdda1c183c7841ee09d9f4c78030322f74f2233ce0e18"} Jan 22 10:31:19 crc kubenswrapper[4892]: I0122 10:31:19.916778 4892 scope.go:117] "RemoveContainer" containerID="f81eb8b7e69a1edbe52357e1f3f06f4d0c882ecd5addfd0f61c6278655edaeb4" Jan 22 10:31:19 crc kubenswrapper[4892]: I0122 10:31:19.916881 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pdht9" Jan 22 10:31:19 crc kubenswrapper[4892]: I0122 10:31:19.964641 4892 scope.go:117] "RemoveContainer" containerID="abe3e6e4542372a884009e6e663457ba37d2f1c68a3dbac6239d1c7d1a15a4e5" Jan 22 10:31:19 crc kubenswrapper[4892]: I0122 10:31:19.965472 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pdht9"] Jan 22 10:31:19 crc kubenswrapper[4892]: I0122 10:31:19.974803 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pdht9"] Jan 22 10:31:19 crc kubenswrapper[4892]: I0122 10:31:19.995147 4892 scope.go:117] "RemoveContainer" containerID="fa668485e42ba5c413f2279501deb37d51d777cf939bd3c8a9993cee7171efa2" Jan 22 10:31:21 crc kubenswrapper[4892]: I0122 10:31:21.428336 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d518312-d8b9-4d48-80ce-756a40f1cf58" path="/var/lib/kubelet/pods/9d518312-d8b9-4d48-80ce-756a40f1cf58/volumes" Jan 22 10:31:46 crc kubenswrapper[4892]: I0122 10:31:46.322940 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:31:46 crc kubenswrapper[4892]: I0122 10:31:46.323511 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:32:16 crc kubenswrapper[4892]: I0122 10:32:16.323034 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:32:16 crc kubenswrapper[4892]: I0122 10:32:16.323521 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:32:42 crc kubenswrapper[4892]: I0122 10:32:42.472139 4892 scope.go:117] "RemoveContainer" containerID="8cf7b1d8d282345d9ebdccddb5bec3eab402fe184108f3cd61e4deab96029ff9" Jan 22 10:32:46 crc kubenswrapper[4892]: I0122 10:32:46.322897 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:32:46 crc kubenswrapper[4892]: I0122 10:32:46.323501 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:32:46 crc kubenswrapper[4892]: I0122 10:32:46.323553 4892 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" Jan 22 10:32:46 crc kubenswrapper[4892]: I0122 10:32:46.324431 4892 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f9d5c23b086ac087bb5de3c649d5c5cf2f778740702a58b9407f4e57215bea88"} pod="openshift-machine-config-operator/machine-config-daemon-w87tf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 10:32:46 crc kubenswrapper[4892]: I0122 10:32:46.324558 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" containerID="cri-o://f9d5c23b086ac087bb5de3c649d5c5cf2f778740702a58b9407f4e57215bea88" gracePeriod=600 Jan 22 10:32:46 crc kubenswrapper[4892]: I0122 10:32:46.672442 4892 generic.go:334] "Generic (PLEG): container finished" podID="4765e554-3060-4876-90fe-5e054619d7a1" containerID="f9d5c23b086ac087bb5de3c649d5c5cf2f778740702a58b9407f4e57215bea88" exitCode=0 Jan 22 10:32:46 crc kubenswrapper[4892]: I0122 10:32:46.672625 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerDied","Data":"f9d5c23b086ac087bb5de3c649d5c5cf2f778740702a58b9407f4e57215bea88"} Jan 22 10:32:46 crc kubenswrapper[4892]: I0122 10:32:46.673046 4892 scope.go:117] "RemoveContainer" containerID="a2823515206b3fbd88435a728042268cd85e4c8af005a094c2894a1d295d0fe6" Jan 22 10:32:46 crc kubenswrapper[4892]: I0122 10:32:46.674161 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerStarted","Data":"4e336036eb0343bb31d145cc96d451b01e94738a07e2e000d031a472b0788da5"} Jan 22 10:34:38 crc kubenswrapper[4892]: I0122 10:34:38.764575 4892 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nsl9d"] Jan 22 10:34:38 crc kubenswrapper[4892]: E0122 10:34:38.765931 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d518312-d8b9-4d48-80ce-756a40f1cf58" containerName="registry-server" Jan 22 10:34:38 crc kubenswrapper[4892]: I0122 10:34:38.765967 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d518312-d8b9-4d48-80ce-756a40f1cf58" containerName="registry-server" Jan 22 10:34:38 crc kubenswrapper[4892]: E0122 10:34:38.766018 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d518312-d8b9-4d48-80ce-756a40f1cf58" containerName="extract-content" Jan 22 10:34:38 crc kubenswrapper[4892]: I0122 10:34:38.766038 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d518312-d8b9-4d48-80ce-756a40f1cf58" containerName="extract-content" Jan 22 10:34:38 crc kubenswrapper[4892]: E0122 10:34:38.766077 4892 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d518312-d8b9-4d48-80ce-756a40f1cf58" containerName="extract-utilities" Jan 22 10:34:38 crc kubenswrapper[4892]: I0122 10:34:38.766092 4892 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d518312-d8b9-4d48-80ce-756a40f1cf58" containerName="extract-utilities" Jan 22 10:34:38 crc kubenswrapper[4892]: I0122 10:34:38.766541 4892 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9d518312-d8b9-4d48-80ce-756a40f1cf58" containerName="registry-server" Jan 22 10:34:38 crc kubenswrapper[4892]: I0122 10:34:38.771861 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nsl9d" Jan 22 10:34:38 crc kubenswrapper[4892]: I0122 10:34:38.773366 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nsl9d"] Jan 22 10:34:38 crc kubenswrapper[4892]: I0122 10:34:38.922132 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48218754-c4cd-4f25-a82a-cce90855455b-catalog-content\") pod \"community-operators-nsl9d\" (UID: \"48218754-c4cd-4f25-a82a-cce90855455b\") " pod="openshift-marketplace/community-operators-nsl9d" Jan 22 10:34:38 crc kubenswrapper[4892]: I0122 10:34:38.922373 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48218754-c4cd-4f25-a82a-cce90855455b-utilities\") pod \"community-operators-nsl9d\" (UID: \"48218754-c4cd-4f25-a82a-cce90855455b\") " pod="openshift-marketplace/community-operators-nsl9d" Jan 22 10:34:38 crc kubenswrapper[4892]: I0122 10:34:38.922746 4892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fz4n\" (UniqueName: \"kubernetes.io/projected/48218754-c4cd-4f25-a82a-cce90855455b-kube-api-access-4fz4n\") pod \"community-operators-nsl9d\" (UID: \"48218754-c4cd-4f25-a82a-cce90855455b\") " pod="openshift-marketplace/community-operators-nsl9d" Jan 22 10:34:39 crc kubenswrapper[4892]: I0122 10:34:39.025082 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48218754-c4cd-4f25-a82a-cce90855455b-utilities\") pod \"community-operators-nsl9d\" (UID: \"48218754-c4cd-4f25-a82a-cce90855455b\") " pod="openshift-marketplace/community-operators-nsl9d" Jan 22 10:34:39 crc kubenswrapper[4892]: I0122 10:34:39.025243 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fz4n\" (UniqueName: \"kubernetes.io/projected/48218754-c4cd-4f25-a82a-cce90855455b-kube-api-access-4fz4n\") pod \"community-operators-nsl9d\" (UID: \"48218754-c4cd-4f25-a82a-cce90855455b\") " pod="openshift-marketplace/community-operators-nsl9d" Jan 22 10:34:39 crc kubenswrapper[4892]: I0122 10:34:39.025637 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48218754-c4cd-4f25-a82a-cce90855455b-utilities\") pod \"community-operators-nsl9d\" (UID: \"48218754-c4cd-4f25-a82a-cce90855455b\") " pod="openshift-marketplace/community-operators-nsl9d" Jan 22 10:34:39 crc kubenswrapper[4892]: I0122 10:34:39.025890 4892 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48218754-c4cd-4f25-a82a-cce90855455b-catalog-content\") pod \"community-operators-nsl9d\" (UID: \"48218754-c4cd-4f25-a82a-cce90855455b\") " pod="openshift-marketplace/community-operators-nsl9d" Jan 22 10:34:39 crc kubenswrapper[4892]: I0122 10:34:39.026382 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48218754-c4cd-4f25-a82a-cce90855455b-catalog-content\") pod \"community-operators-nsl9d\" (UID: 
\"48218754-c4cd-4f25-a82a-cce90855455b\") " pod="openshift-marketplace/community-operators-nsl9d" Jan 22 10:34:39 crc kubenswrapper[4892]: I0122 10:34:39.049144 4892 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fz4n\" (UniqueName: \"kubernetes.io/projected/48218754-c4cd-4f25-a82a-cce90855455b-kube-api-access-4fz4n\") pod \"community-operators-nsl9d\" (UID: \"48218754-c4cd-4f25-a82a-cce90855455b\") " pod="openshift-marketplace/community-operators-nsl9d" Jan 22 10:34:39 crc kubenswrapper[4892]: I0122 10:34:39.103782 4892 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nsl9d" Jan 22 10:34:39 crc kubenswrapper[4892]: I0122 10:34:39.690744 4892 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nsl9d"] Jan 22 10:34:39 crc kubenswrapper[4892]: I0122 10:34:39.705447 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsl9d" event={"ID":"48218754-c4cd-4f25-a82a-cce90855455b","Type":"ContainerStarted","Data":"3de27144b4c01990f3bd55600bc8008c210c5aa6d4288c8c9e9779e2c6cc72da"} Jan 22 10:34:40 crc kubenswrapper[4892]: I0122 10:34:40.718445 4892 generic.go:334] "Generic (PLEG): container finished" podID="48218754-c4cd-4f25-a82a-cce90855455b" containerID="d2d3783ccdb38804d13c90a9a2b27ef5981b58e91902b9c40a13b716b5ad291e" exitCode=0 Jan 22 10:34:40 crc kubenswrapper[4892]: I0122 10:34:40.718701 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsl9d" event={"ID":"48218754-c4cd-4f25-a82a-cce90855455b","Type":"ContainerDied","Data":"d2d3783ccdb38804d13c90a9a2b27ef5981b58e91902b9c40a13b716b5ad291e"} Jan 22 10:34:41 crc kubenswrapper[4892]: I0122 10:34:41.730265 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsl9d" event={"ID":"48218754-c4cd-4f25-a82a-cce90855455b","Type":"ContainerStarted","Data":"597a718f4542b024a827e74f2cdff9db100026caa196562d2c21c60a56759d98"} Jan 22 10:34:42 crc kubenswrapper[4892]: I0122 10:34:42.747388 4892 generic.go:334] "Generic (PLEG): container finished" podID="48218754-c4cd-4f25-a82a-cce90855455b" containerID="597a718f4542b024a827e74f2cdff9db100026caa196562d2c21c60a56759d98" exitCode=0 Jan 22 10:34:42 crc kubenswrapper[4892]: I0122 10:34:42.747450 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsl9d" event={"ID":"48218754-c4cd-4f25-a82a-cce90855455b","Type":"ContainerDied","Data":"597a718f4542b024a827e74f2cdff9db100026caa196562d2c21c60a56759d98"} Jan 22 10:34:43 crc kubenswrapper[4892]: I0122 10:34:43.783127 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsl9d" event={"ID":"48218754-c4cd-4f25-a82a-cce90855455b","Type":"ContainerStarted","Data":"e2990bfa54c8a20759a09841b748a14dd481885eacf47e38877b9dd37057fcc2"} Jan 22 10:34:43 crc kubenswrapper[4892]: I0122 10:34:43.805084 4892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nsl9d" podStartSLOduration=3.214458297 podStartE2EDuration="5.805067309s" podCreationTimestamp="2026-01-22 10:34:38 +0000 UTC" firstStartedPulling="2026-01-22 10:34:40.720297313 +0000 UTC m=+5050.564376376" lastFinishedPulling="2026-01-22 10:34:43.310906315 +0000 UTC m=+5053.154985388" observedRunningTime="2026-01-22 10:34:43.804742181 +0000 UTC m=+5053.648821294" 
watchObservedRunningTime="2026-01-22 10:34:43.805067309 +0000 UTC m=+5053.649146372" Jan 22 10:34:46 crc kubenswrapper[4892]: I0122 10:34:46.323730 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:34:46 crc kubenswrapper[4892]: I0122 10:34:46.324878 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:34:49 crc kubenswrapper[4892]: I0122 10:34:49.104166 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nsl9d" Jan 22 10:34:49 crc kubenswrapper[4892]: I0122 10:34:49.104410 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nsl9d" Jan 22 10:34:49 crc kubenswrapper[4892]: I0122 10:34:49.169410 4892 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nsl9d" Jan 22 10:34:49 crc kubenswrapper[4892]: I0122 10:34:49.887029 4892 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nsl9d" Jan 22 10:34:49 crc kubenswrapper[4892]: I0122 10:34:49.942020 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nsl9d"] Jan 22 10:34:51 crc kubenswrapper[4892]: I0122 10:34:51.856353 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nsl9d" podUID="48218754-c4cd-4f25-a82a-cce90855455b" containerName="registry-server" containerID="cri-o://e2990bfa54c8a20759a09841b748a14dd481885eacf47e38877b9dd37057fcc2" gracePeriod=2 Jan 22 10:34:52 crc kubenswrapper[4892]: I0122 10:34:52.593126 4892 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nsl9d" Jan 22 10:34:52 crc kubenswrapper[4892]: I0122 10:34:52.738387 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48218754-c4cd-4f25-a82a-cce90855455b-utilities\") pod \"48218754-c4cd-4f25-a82a-cce90855455b\" (UID: \"48218754-c4cd-4f25-a82a-cce90855455b\") " Jan 22 10:34:52 crc kubenswrapper[4892]: I0122 10:34:52.738482 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48218754-c4cd-4f25-a82a-cce90855455b-catalog-content\") pod \"48218754-c4cd-4f25-a82a-cce90855455b\" (UID: \"48218754-c4cd-4f25-a82a-cce90855455b\") " Jan 22 10:34:52 crc kubenswrapper[4892]: I0122 10:34:52.738513 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fz4n\" (UniqueName: \"kubernetes.io/projected/48218754-c4cd-4f25-a82a-cce90855455b-kube-api-access-4fz4n\") pod \"48218754-c4cd-4f25-a82a-cce90855455b\" (UID: \"48218754-c4cd-4f25-a82a-cce90855455b\") " Jan 22 10:34:52 crc kubenswrapper[4892]: I0122 10:34:52.744324 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48218754-c4cd-4f25-a82a-cce90855455b-utilities" (OuterVolumeSpecName: "utilities") pod "48218754-c4cd-4f25-a82a-cce90855455b" (UID: "48218754-c4cd-4f25-a82a-cce90855455b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:34:52 crc kubenswrapper[4892]: I0122 10:34:52.759963 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48218754-c4cd-4f25-a82a-cce90855455b-kube-api-access-4fz4n" (OuterVolumeSpecName: "kube-api-access-4fz4n") pod "48218754-c4cd-4f25-a82a-cce90855455b" (UID: "48218754-c4cd-4f25-a82a-cce90855455b"). InnerVolumeSpecName "kube-api-access-4fz4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:34:52 crc kubenswrapper[4892]: I0122 10:34:52.819084 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48218754-c4cd-4f25-a82a-cce90855455b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48218754-c4cd-4f25-a82a-cce90855455b" (UID: "48218754-c4cd-4f25-a82a-cce90855455b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:34:52 crc kubenswrapper[4892]: I0122 10:34:52.842024 4892 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48218754-c4cd-4f25-a82a-cce90855455b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:34:52 crc kubenswrapper[4892]: I0122 10:34:52.842070 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fz4n\" (UniqueName: \"kubernetes.io/projected/48218754-c4cd-4f25-a82a-cce90855455b-kube-api-access-4fz4n\") on node \"crc\" DevicePath \"\"" Jan 22 10:34:52 crc kubenswrapper[4892]: I0122 10:34:52.842088 4892 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48218754-c4cd-4f25-a82a-cce90855455b-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:34:52 crc kubenswrapper[4892]: I0122 10:34:52.870294 4892 generic.go:334] "Generic (PLEG): container finished" podID="48218754-c4cd-4f25-a82a-cce90855455b" containerID="e2990bfa54c8a20759a09841b748a14dd481885eacf47e38877b9dd37057fcc2" exitCode=0 Jan 22 10:34:52 crc kubenswrapper[4892]: I0122 10:34:52.870344 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsl9d" event={"ID":"48218754-c4cd-4f25-a82a-cce90855455b","Type":"ContainerDied","Data":"e2990bfa54c8a20759a09841b748a14dd481885eacf47e38877b9dd37057fcc2"} Jan 22 10:34:52 crc kubenswrapper[4892]: I0122 10:34:52.870371 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nsl9d" Jan 22 10:34:52 crc kubenswrapper[4892]: I0122 10:34:52.870387 4892 scope.go:117] "RemoveContainer" containerID="e2990bfa54c8a20759a09841b748a14dd481885eacf47e38877b9dd37057fcc2" Jan 22 10:34:52 crc kubenswrapper[4892]: I0122 10:34:52.870375 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsl9d" event={"ID":"48218754-c4cd-4f25-a82a-cce90855455b","Type":"ContainerDied","Data":"3de27144b4c01990f3bd55600bc8008c210c5aa6d4288c8c9e9779e2c6cc72da"} Jan 22 10:34:52 crc kubenswrapper[4892]: I0122 10:34:52.891589 4892 scope.go:117] "RemoveContainer" containerID="597a718f4542b024a827e74f2cdff9db100026caa196562d2c21c60a56759d98" Jan 22 10:34:52 crc kubenswrapper[4892]: I0122 10:34:52.936878 4892 scope.go:117] "RemoveContainer" containerID="d2d3783ccdb38804d13c90a9a2b27ef5981b58e91902b9c40a13b716b5ad291e" Jan 22 10:34:52 crc kubenswrapper[4892]: I0122 10:34:52.944341 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nsl9d"] Jan 22 10:34:52 crc kubenswrapper[4892]: I0122 10:34:52.955026 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nsl9d"] Jan 22 10:34:52 crc kubenswrapper[4892]: I0122 10:34:52.971646 4892 scope.go:117] "RemoveContainer" containerID="e2990bfa54c8a20759a09841b748a14dd481885eacf47e38877b9dd37057fcc2" Jan 22 10:34:52 crc kubenswrapper[4892]: E0122 10:34:52.972370 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2990bfa54c8a20759a09841b748a14dd481885eacf47e38877b9dd37057fcc2\": container with ID starting with e2990bfa54c8a20759a09841b748a14dd481885eacf47e38877b9dd37057fcc2 not found: ID does not exist" containerID="e2990bfa54c8a20759a09841b748a14dd481885eacf47e38877b9dd37057fcc2" Jan 22 10:34:52 crc kubenswrapper[4892]: I0122 10:34:52.972452 
4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2990bfa54c8a20759a09841b748a14dd481885eacf47e38877b9dd37057fcc2"} err="failed to get container status \"e2990bfa54c8a20759a09841b748a14dd481885eacf47e38877b9dd37057fcc2\": rpc error: code = NotFound desc = could not find container \"e2990bfa54c8a20759a09841b748a14dd481885eacf47e38877b9dd37057fcc2\": container with ID starting with e2990bfa54c8a20759a09841b748a14dd481885eacf47e38877b9dd37057fcc2 not found: ID does not exist" Jan 22 10:34:52 crc kubenswrapper[4892]: I0122 10:34:52.972504 4892 scope.go:117] "RemoveContainer" containerID="597a718f4542b024a827e74f2cdff9db100026caa196562d2c21c60a56759d98" Jan 22 10:34:52 crc kubenswrapper[4892]: E0122 10:34:52.972947 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"597a718f4542b024a827e74f2cdff9db100026caa196562d2c21c60a56759d98\": container with ID starting with 597a718f4542b024a827e74f2cdff9db100026caa196562d2c21c60a56759d98 not found: ID does not exist" containerID="597a718f4542b024a827e74f2cdff9db100026caa196562d2c21c60a56759d98" Jan 22 10:34:52 crc kubenswrapper[4892]: I0122 10:34:52.972984 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"597a718f4542b024a827e74f2cdff9db100026caa196562d2c21c60a56759d98"} err="failed to get container status \"597a718f4542b024a827e74f2cdff9db100026caa196562d2c21c60a56759d98\": rpc error: code = NotFound desc = could not find container \"597a718f4542b024a827e74f2cdff9db100026caa196562d2c21c60a56759d98\": container with ID starting with 597a718f4542b024a827e74f2cdff9db100026caa196562d2c21c60a56759d98 not found: ID does not exist" Jan 22 10:34:52 crc kubenswrapper[4892]: I0122 10:34:52.973006 4892 scope.go:117] "RemoveContainer" containerID="d2d3783ccdb38804d13c90a9a2b27ef5981b58e91902b9c40a13b716b5ad291e" Jan 22 10:34:52 crc kubenswrapper[4892]: E0122 10:34:52.973380 4892 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2d3783ccdb38804d13c90a9a2b27ef5981b58e91902b9c40a13b716b5ad291e\": container with ID starting with d2d3783ccdb38804d13c90a9a2b27ef5981b58e91902b9c40a13b716b5ad291e not found: ID does not exist" containerID="d2d3783ccdb38804d13c90a9a2b27ef5981b58e91902b9c40a13b716b5ad291e" Jan 22 10:34:52 crc kubenswrapper[4892]: I0122 10:34:52.973461 4892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2d3783ccdb38804d13c90a9a2b27ef5981b58e91902b9c40a13b716b5ad291e"} err="failed to get container status \"d2d3783ccdb38804d13c90a9a2b27ef5981b58e91902b9c40a13b716b5ad291e\": rpc error: code = NotFound desc = could not find container \"d2d3783ccdb38804d13c90a9a2b27ef5981b58e91902b9c40a13b716b5ad291e\": container with ID starting with d2d3783ccdb38804d13c90a9a2b27ef5981b58e91902b9c40a13b716b5ad291e not found: ID does not exist" Jan 22 10:34:53 crc kubenswrapper[4892]: I0122 10:34:53.431555 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48218754-c4cd-4f25-a82a-cce90855455b" path="/var/lib/kubelet/pods/48218754-c4cd-4f25-a82a-cce90855455b/volumes" Jan 22 10:35:16 crc kubenswrapper[4892]: I0122 10:35:16.323325 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:35:16 crc kubenswrapper[4892]: I0122 10:35:16.323844 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:35:46 crc kubenswrapper[4892]: I0122 10:35:46.323999 4892 patch_prober.go:28] interesting pod/machine-config-daemon-w87tf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:35:46 crc kubenswrapper[4892]: I0122 10:35:46.324979 4892 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:35:46 crc kubenswrapper[4892]: I0122 10:35:46.325043 4892 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" Jan 22 10:35:46 crc kubenswrapper[4892]: I0122 10:35:46.326485 4892 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4e336036eb0343bb31d145cc96d451b01e94738a07e2e000d031a472b0788da5"} pod="openshift-machine-config-operator/machine-config-daemon-w87tf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 10:35:46 crc kubenswrapper[4892]: I0122 10:35:46.326689 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" containerName="machine-config-daemon" containerID="cri-o://4e336036eb0343bb31d145cc96d451b01e94738a07e2e000d031a472b0788da5" gracePeriod=600 Jan 22 10:35:46 crc kubenswrapper[4892]: E0122 10:35:46.485415 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1" Jan 22 10:35:47 crc kubenswrapper[4892]: I0122 10:35:47.344514 4892 generic.go:334] "Generic (PLEG): container finished" podID="4765e554-3060-4876-90fe-5e054619d7a1" containerID="4e336036eb0343bb31d145cc96d451b01e94738a07e2e000d031a472b0788da5" exitCode=0 Jan 22 10:35:47 crc kubenswrapper[4892]: I0122 10:35:47.344561 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" event={"ID":"4765e554-3060-4876-90fe-5e054619d7a1","Type":"ContainerDied","Data":"4e336036eb0343bb31d145cc96d451b01e94738a07e2e000d031a472b0788da5"} Jan 22 10:35:47 crc kubenswrapper[4892]: I0122 10:35:47.344593 4892 scope.go:117] "RemoveContainer" containerID="f9d5c23b086ac087bb5de3c649d5c5cf2f778740702a58b9407f4e57215bea88" Jan 22 10:35:47 crc kubenswrapper[4892]: I0122 
10:35:47.345229 4892 scope.go:117] "RemoveContainer" containerID="4e336036eb0343bb31d145cc96d451b01e94738a07e2e000d031a472b0788da5"
Jan 22 10:35:47 crc kubenswrapper[4892]: E0122 10:35:47.345604 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1"
Jan 22 10:36:00 crc kubenswrapper[4892]: I0122 10:36:00.419004 4892 scope.go:117] "RemoveContainer" containerID="4e336036eb0343bb31d145cc96d451b01e94738a07e2e000d031a472b0788da5"
Jan 22 10:36:00 crc kubenswrapper[4892]: E0122 10:36:00.419745 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1"
Jan 22 10:36:13 crc kubenswrapper[4892]: I0122 10:36:13.420532 4892 scope.go:117] "RemoveContainer" containerID="4e336036eb0343bb31d145cc96d451b01e94738a07e2e000d031a472b0788da5"
Jan 22 10:36:13 crc kubenswrapper[4892]: E0122 10:36:13.421806 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1"
Jan 22 10:36:24 crc kubenswrapper[4892]: I0122 10:36:24.418475 4892 scope.go:117] "RemoveContainer" containerID="4e336036eb0343bb31d145cc96d451b01e94738a07e2e000d031a472b0788da5"
Jan 22 10:36:24 crc kubenswrapper[4892]: E0122 10:36:24.419385 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1"
Jan 22 10:36:25 crc kubenswrapper[4892]: I0122 10:36:25.703865 4892 generic.go:334] "Generic (PLEG): container finished" podID="c55dcac5-e0d5-4593-9edb-eb5f847f8d47" containerID="077ce771e2d474061e17ea009a8159e383a769bdaa861a3130e9e49df88c75f8" exitCode=0
Jan 22 10:36:25 crc kubenswrapper[4892]: I0122 10:36:25.703924 4892 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cnzvv/must-gather-7lf9l" event={"ID":"c55dcac5-e0d5-4593-9edb-eb5f847f8d47","Type":"ContainerDied","Data":"077ce771e2d474061e17ea009a8159e383a769bdaa861a3130e9e49df88c75f8"}
Jan 22 10:36:25 crc kubenswrapper[4892]: I0122 10:36:25.704892 4892 scope.go:117] "RemoveContainer" containerID="077ce771e2d474061e17ea009a8159e383a769bdaa861a3130e9e49df88c75f8"
Jan 22 10:36:26 crc kubenswrapper[4892]: I0122 10:36:26.076668 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cnzvv_must-gather-7lf9l_c55dcac5-e0d5-4593-9edb-eb5f847f8d47/gather/0.log"
Jan 22 10:36:34 crc kubenswrapper[4892]: I0122 10:36:34.434424 4892 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cnzvv/must-gather-7lf9l"]
Jan 22 10:36:34 crc kubenswrapper[4892]: I0122 10:36:34.439836 4892 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-cnzvv/must-gather-7lf9l" podUID="c55dcac5-e0d5-4593-9edb-eb5f847f8d47" containerName="copy" containerID="cri-o://82de0404811546e3726641b9f35c0ed7b004346161307e870d6d006f2eaaa6e6" gracePeriod=2
Jan 22 10:36:34 crc kubenswrapper[4892]: I0122 10:36:34.451075 4892 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cnzvv/must-gather-7lf9l"]
Jan 22 10:36:34 crc kubenswrapper[4892]: I0122 10:36:34.791239 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cnzvv_must-gather-7lf9l_c55dcac5-e0d5-4593-9edb-eb5f847f8d47/copy/0.log"
Jan 22 10:36:34 crc kubenswrapper[4892]: I0122 10:36:34.794114 4892 generic.go:334] "Generic (PLEG): container finished" podID="c55dcac5-e0d5-4593-9edb-eb5f847f8d47" containerID="82de0404811546e3726641b9f35c0ed7b004346161307e870d6d006f2eaaa6e6" exitCode=143
Jan 22 10:36:34 crc kubenswrapper[4892]: I0122 10:36:34.930115 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cnzvv_must-gather-7lf9l_c55dcac5-e0d5-4593-9edb-eb5f847f8d47/copy/0.log"
Jan 22 10:36:34 crc kubenswrapper[4892]: I0122 10:36:34.930774 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cnzvv/must-gather-7lf9l"
Jan 22 10:36:35 crc kubenswrapper[4892]: I0122 10:36:35.022776 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c55dcac5-e0d5-4593-9edb-eb5f847f8d47-must-gather-output\") pod \"c55dcac5-e0d5-4593-9edb-eb5f847f8d47\" (UID: \"c55dcac5-e0d5-4593-9edb-eb5f847f8d47\") "
Jan 22 10:36:35 crc kubenswrapper[4892]: I0122 10:36:35.022897 4892 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlfjh\" (UniqueName: \"kubernetes.io/projected/c55dcac5-e0d5-4593-9edb-eb5f847f8d47-kube-api-access-hlfjh\") pod \"c55dcac5-e0d5-4593-9edb-eb5f847f8d47\" (UID: \"c55dcac5-e0d5-4593-9edb-eb5f847f8d47\") "
Jan 22 10:36:35 crc kubenswrapper[4892]: I0122 10:36:35.035625 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c55dcac5-e0d5-4593-9edb-eb5f847f8d47-kube-api-access-hlfjh" (OuterVolumeSpecName: "kube-api-access-hlfjh") pod "c55dcac5-e0d5-4593-9edb-eb5f847f8d47" (UID: "c55dcac5-e0d5-4593-9edb-eb5f847f8d47"). InnerVolumeSpecName "kube-api-access-hlfjh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:36:35 crc kubenswrapper[4892]: I0122 10:36:35.126410 4892 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlfjh\" (UniqueName: \"kubernetes.io/projected/c55dcac5-e0d5-4593-9edb-eb5f847f8d47-kube-api-access-hlfjh\") on node \"crc\" DevicePath \"\""
Jan 22 10:36:35 crc kubenswrapper[4892]: I0122 10:36:35.209110 4892 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c55dcac5-e0d5-4593-9edb-eb5f847f8d47-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "c55dcac5-e0d5-4593-9edb-eb5f847f8d47" (UID: "c55dcac5-e0d5-4593-9edb-eb5f847f8d47"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 10:36:35 crc kubenswrapper[4892]: I0122 10:36:35.228935 4892 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c55dcac5-e0d5-4593-9edb-eb5f847f8d47-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 22 10:36:35 crc kubenswrapper[4892]: I0122 10:36:35.430211 4892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c55dcac5-e0d5-4593-9edb-eb5f847f8d47" path="/var/lib/kubelet/pods/c55dcac5-e0d5-4593-9edb-eb5f847f8d47/volumes"
Jan 22 10:36:35 crc kubenswrapper[4892]: I0122 10:36:35.808023 4892 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cnzvv_must-gather-7lf9l_c55dcac5-e0d5-4593-9edb-eb5f847f8d47/copy/0.log"
Jan 22 10:36:35 crc kubenswrapper[4892]: I0122 10:36:35.808813 4892 scope.go:117] "RemoveContainer" containerID="82de0404811546e3726641b9f35c0ed7b004346161307e870d6d006f2eaaa6e6"
Jan 22 10:36:35 crc kubenswrapper[4892]: I0122 10:36:35.808958 4892 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cnzvv/must-gather-7lf9l"
Jan 22 10:36:35 crc kubenswrapper[4892]: I0122 10:36:35.837155 4892 scope.go:117] "RemoveContainer" containerID="077ce771e2d474061e17ea009a8159e383a769bdaa861a3130e9e49df88c75f8"
Jan 22 10:36:37 crc kubenswrapper[4892]: I0122 10:36:37.418555 4892 scope.go:117] "RemoveContainer" containerID="4e336036eb0343bb31d145cc96d451b01e94738a07e2e000d031a472b0788da5"
Jan 22 10:36:37 crc kubenswrapper[4892]: E0122 10:36:37.419252 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1"
Jan 22 10:36:49 crc kubenswrapper[4892]: I0122 10:36:49.418835 4892 scope.go:117] "RemoveContainer" containerID="4e336036eb0343bb31d145cc96d451b01e94738a07e2e000d031a472b0788da5"
Jan 22 10:36:49 crc kubenswrapper[4892]: E0122 10:36:49.419873 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1"
Jan 22 10:37:01 crc kubenswrapper[4892]: I0122 10:37:01.429497 4892 scope.go:117] "RemoveContainer" containerID="4e336036eb0343bb31d145cc96d451b01e94738a07e2e000d031a472b0788da5"
Jan 22 10:37:01 crc kubenswrapper[4892]: E0122 10:37:01.430255 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1"
Jan 22 10:37:13 crc kubenswrapper[4892]: I0122 10:37:13.421158 4892 scope.go:117] "RemoveContainer" containerID="4e336036eb0343bb31d145cc96d451b01e94738a07e2e000d031a472b0788da5"
Jan 22 10:37:13 crc kubenswrapper[4892]: E0122 10:37:13.422031 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1"
Jan 22 10:37:26 crc kubenswrapper[4892]: I0122 10:37:26.418873 4892 scope.go:117] "RemoveContainer" containerID="4e336036eb0343bb31d145cc96d451b01e94738a07e2e000d031a472b0788da5"
Jan 22 10:37:26 crc kubenswrapper[4892]: E0122 10:37:26.419694 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1"
Jan 22 10:37:40 crc kubenswrapper[4892]: I0122 10:37:40.418823 4892 scope.go:117] "RemoveContainer" containerID="4e336036eb0343bb31d145cc96d451b01e94738a07e2e000d031a472b0788da5"
Jan 22 10:37:40 crc kubenswrapper[4892]: E0122 10:37:40.419537 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1"
Jan 22 10:37:51 crc kubenswrapper[4892]: I0122 10:37:51.419337 4892 scope.go:117] "RemoveContainer" containerID="4e336036eb0343bb31d145cc96d451b01e94738a07e2e000d031a472b0788da5"
Jan 22 10:37:51 crc kubenswrapper[4892]: E0122 10:37:51.422835 4892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w87tf_openshift-machine-config-operator(4765e554-3060-4876-90fe-5e054619d7a1)\"" pod="openshift-machine-config-operator/machine-config-daemon-w87tf" podUID="4765e554-3060-4876-90fe-5e054619d7a1"